Tag Archives: qubits

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing its 14nm low-power manufacturing process and speculation about a 10nm computer chip (my July 9, 2014 posting), IBM has made an announcement about a 7nm chip, as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment, which represents 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years. However, scaling to 7 nanometers, and perhaps below, by the end of the decade will require significant investment and innovation in semiconductor architectures, as well as the invention of new tools and techniques for manufacturing.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development of IBM’s highly differentiated computing systems for cloud and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” A quantum bit, or qubit, by contrast, can hold a value of “1,” “0,” or both values at the same time. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.
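
An aside from me, not from IBM’s release: for readers who like to see the arithmetic, here is a minimal Python sketch of what a superposition actually is, a normalized combination α|0⟩ + β|1⟩ where |α|² and |β|² are the probabilities of reading out 0 or 1.

```python
import numpy as np

# Computational basis states |0> and |1> as vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# An equal superposition (alpha = beta = 1/sqrt(2)): the qubit is "both"
# 0 and 1 until it is measured.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

# Born rule: probabilities of measuring 0 or 1.
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(psi)      # [0.70710678 0.70710678]
print(p0, p1)   # 0.5 0.5
```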

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in the field of experimental and theoretical quantum information, a field that is still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of a parity check with three superconducting qubits, an essential building block for one type of quantum computer.
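
IBM’s release gives no technical detail about that parity-check experiment, so the following is only my generic numerical illustration (Python with numpy) of what a two-qubit ZZ parity measurement distinguishes: it reports whether two qubits agree or disagree without revealing their individual values, which is the basic primitive of quantum error correction.

```python
import numpy as np

# Single-qubit Pauli Z and the two-qubit parity operator Z(x)Z.
Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)  # eigenvalue +1 for |00>, |11> (even parity); -1 for |01>, |10>

def parity_expectation(state):
    """Expectation value <psi| Z(x)Z |psi> for a two-qubit state vector."""
    return np.real(np.vdot(state, ZZ @ state))

# Basis states as 4-dimensional vectors, ordered |00>, |01>, |10>, |11>.
basis = {"00": 0, "01": 1, "10": 2, "11": 3}
for label, idx in basis.items():
    psi = np.zeros(4)
    psi[idx] = 1.0
    print(label, parity_expectation(psi))  # 00 -> +1, 01 -> -1, 10 -> -1, 11 -> +1

# A stabilizer measurement like this reveals *whether* one qubit has flipped
# relative to the other, without collapsing the encoded information.
```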

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics based transceiver with wavelength division multiplexing.  Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real-time, huge volumes of information known as Big Data. Silicon nanophotonics technology provides answers to Big Data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart from each other, and moving terabytes of data via pulses of light through optical fibers.

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99 percent, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling–this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – about 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible.

Graphene

Graphene is pure carbon in the form of a one atomic layer thick sheet.  It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible.  Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. Its characteristics offer the possibility to build faster switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business where it will be a more efficient switch than those currently used.

Recently, in 2013, IBM demonstrated the world’s first graphene-based integrated circuit receiver front end for wireless communications. The circuit consisted of a 2-stage amplifier and a down converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet — even after closing the valve as far as possible water continues to drip — this is similar to today’s transistor, in that energy is constantly “leaking” or being lost or wasted in the off-state.

A potential alternative to today’s power-hungry silicon field effect transistors is so-called steep-slope devices. They could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistor, the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over complementary CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.
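
Some context of my own, not from IBM’s release: the reason “steep slope” matters is that a conventional transistor’s turn-off steepness is thermally limited to roughly 60 mV of gate voltage per tenfold drop in current at room temperature, which puts a floor under the supply voltage. A quick back-of-the-envelope check of that limit:

```python
import math

# Thermal (Boltzmann) limit on subthreshold swing for a conventional MOSFET:
# SS = (kT/q) * ln(10), i.e. the gate voltage needed to cut the off-current
# by a factor of 10.
k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # room temperature, K

ss_limit = (k * T / q) * math.log(10) * 1e3  # in mV per decade
print(f"Thermal limit at 300 K: {ss_limit:.1f} mV/decade")  # ~59.5 mV/decade

# A steep-slope device such as a TFET is not bound by this limit, so it can
# switch off with a smaller gate-voltage swing and run at a lower supply
# voltage; since dynamic power scales roughly with voltage squared, even a
# modest voltage reduction translates into large power savings.
```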

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first-ever InAs/Si tunnel diodes and TFETs, using InAs as the source and Si as the channel with a wrap-around gate, as steep-slope devices for low-power applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed silicon germanium (SiGe), High-k gate dielectrics, embedded DRAM, 3D chip stacking, and Air gap insulators.

IBM researchers also are credited with initiating the era of nano devices following the Nobel prize winning invention of the scanning tunneling microscope which enabled nano and atomic scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop the future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the NanoElectronics Research Initiative (NRI), the Semiconductor Advanced Research Network (STARnet), and the Global Research Consortium (GRC) of the Semiconductor Research Corporation.

I highlighted ‘new memory technologies’ as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Meg Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses D-Wave’s situation at that time.

Final observation: these are fascinating developments, especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene.  I encourage you to read Dexter’s posting as Dexter got answers to some very astute and pointed questions.

Graphene, Perimeter Institute, and condensed matter physics

In short, researchers at Canada’s Perimeter Institute are working on theoretical models involving graphene, which could lead to quantum computing. A July 3, 2014 Perimeter Institute news release by Erin Bow (also on EurekAlert) provides some insight into the connections between graphene and condensed matter physics (Note: Bow has included some good basic explanations of graphene, quasiparticles, and more for beginners),

One of the hottest materials in condensed matter research today is graphene.

Graphene had an unlikely start: it began with researchers messing around with pencil marks on paper. Pencil “lead” is actually made of graphite, which is a soft crystal lattice made of nothing but carbon atoms. When pencils deposit that graphite on paper, the lattice is laid down in thin sheets. By pulling that lattice apart into thinner sheets – originally using Scotch tape – researchers discovered that they could make flakes of crystal just one atom thick.

The name for this atom-scale chicken wire is graphene. Those folks with the Scotch tape, Andre Geim and Konstantin Novoselov, won the 2010 Nobel Prize for discovering it. “As a material, it is completely new – not only the thinnest ever but also the strongest,” wrote the Nobel committee. “As a conductor of electricity, it performs as well as copper. As a conductor of heat, it outperforms all other known materials. It is almost completely transparent, yet so dense that not even helium, the smallest gas atom, can pass through it.”

Developing a theoretical model of graphene

Graphene is not just a practical wonder – it’s also a wonderland for theorists. Confined to the two-dimensional surface of the graphene, the electrons behave strangely. All kinds of new phenomena can be seen, and new ideas can be tested. Testing new ideas in graphene is exactly what Perimeter researchers Zlatko Papić and Dmitry (Dima) Abanin set out to do.

“Dima and I started working on graphene a very long time ago,” says Papić. “We first met in 2009 at a conference in Sweden. I was a grad student and Dima was in the first year of his postdoc, I think.”

The two young scientists got to talking about what new physics they might be able to observe in the strange new material when it is exposed to a strong magnetic field.

“We decided we wanted to model the material,” says Papić. They’ve been working on their theoretical model of graphene, on and off, ever since. The two are now both at Perimeter Institute, where Papić is a postdoctoral researcher and Abanin is a faculty member. They are both cross-appointed with the Institute for Quantum Computing (IQC) at the University of Waterloo.

In January 2014, they published a paper in Physical Review Letters presenting new ideas about how to induce a strange but interesting state in graphene – one where it appears as if particles inside it have a fraction of an electron’s charge.

It’s called the fractional quantum Hall effect (FQHE), and it’s head turning. Like the speed of light or Planck’s constant, the charge of the electron is a fixed point in the disorienting quantum universe.

Every system in the universe carries whole multiples of a single electron’s charge. When the FQHE was first discovered in the 1980s, condensed matter physicists quickly worked out that the fractionally charged “particles” inside their semiconductors were actually quasiparticles – that is, emergent collective behaviours of the system that imitate particles.

Graphene is an ideal material in which to study the FQHE. “Because it’s just one atom thick, you have direct access to the surface,” says Papić. “In semiconductors, where FQHE was first observed, the gas of electrons that create this effect are buried deep inside the material. They’re hard to access and manipulate. But with graphene you can imagine manipulating these states much more easily.”

In the January paper, Abanin and Papić reported novel types of FQHE states that could arise in bilayer graphene – that is, in two sheets of graphene laid one on top of another – when it is placed in a strong perpendicular magnetic field. In an earlier work from 2012, they argued that applying an electric field across the surface of bilayer graphene could offer a unique experimental knob to induce transitions between FQHE states. Combining the two effects, they argued, would be an ideal way to look at special FQHE states and the transitions between them.

Once the scientists had developed their theory, they went to work on some experiments,

Two experimental groups – one in Geneva, involving Abanin, and one at Columbia, involving both Abanin and Papić – have since put the electric field + magnetic field method to good use. The paper by the Columbia group appears in the July 4 issue of Science. A third group, led by Amir Yacoby of Harvard, is doing closely related work.

“We often work hand-in-hand with experimentalists,” says Papić. “One of the reasons I like condensed matter is that often even the most sophisticated, cutting-edge theory stands a good chance of being quickly checked with experiment.”

Inside both the magnetic and electric field, the electrical resistance of the graphene demonstrates the strange behaviour characteristic of the FQHE. Instead of resistance that varies in a smooth curve with voltage, resistance jumps suddenly from one level to another, and then plateaus – a kind of staircase of resistance. Each stair step is a different state of matter, defined by the complex quantum tangle of charges, spins, and other properties inside the graphene.
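
A gloss of mine, not from the Perimeter news release: on each “stair step” the Hall resistance locks to R_xy = h/(νe²), where the filling factor ν is an integer in the ordinary quantum Hall effect and a fraction such as 1/3 or 5/2 in the FQHE. Here is a rough sketch of the plateau values,

```python
# Hall resistance plateaus R_xy = h / (nu * e^2); each plateau corresponds to
# a distinct quantum Hall state (a "stair step" in the description above).
h = 6.62607015e-34        # Planck constant, J*s
e = 1.602176634e-19       # elementary charge, C
R_K = h / e**2            # von Klitzing constant, ~25,812.8 ohms

# A few illustrative filling factors; even-denominator fractions such as 5/2
# are the ones tied to the non-Abelian anyons discussed further down.
plateaus = {"1": 1, "2": 2, "1/3": 1 / 3, "2/5": 2 / 5, "5/2": 5 / 2}
for label, nu in plateaus.items():
    print(f"nu = {label:>3}:  R_xy = {R_K / nu:>10,.0f} ohms")
```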

“The number of states is quite rich,” says Papić. “We’re very interested in bilayer graphene because of the number of states we are detecting and because we have these mechanisms – like tuning the electric field – to study how these states are interrelated, and what happens when the material changes from one state to another.”

For the moment, researchers are particularly interested in the stair steps whose “height” is described by a fraction with an even denominator. That’s because the quasiparticles in that state are expected to have an unusual property.

There are two kinds of particles in our three-dimensional world: fermions (such as electrons), where two identical particles can’t occupy one state, and bosons (such as photons), where two identical particles actually want to occupy one state. In three dimensions, fermions are fermions and bosons are bosons, and never the twain shall meet.

But a sheet of graphene doesn’t have three dimensions – it has two. It’s effectively a tiny two-dimensional universe, and in that universe, new phenomena can occur. For one thing, fermions and bosons can meet halfway – becoming anyons, which can be anywhere in between fermions and bosons. The quasiparticles in these special stair-step states are expected to be anyons.

In particular, the researchers are hoping these quasiparticles will be non-Abelian anyons, as their theory indicates they should be. That would be exciting because non-Abelian anyons can be used in the making of qubits.

Graphene qubits?

Qubits are to quantum computers what bits are to ordinary computers: both a basic unit of information and the basic piece of equipment that stores that information. Because of their quantum complexity, qubits are more powerful than ordinary bits and their power grows exponentially as more of them are added. A quantum computer of only a hundred qubits can tackle certain problems beyond the reach of even the best non-quantum supercomputers. Or, it could, if someone could find a way to build stable qubits.
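
The “grows exponentially” claim is really just a counting argument: describing n qubits exactly takes 2^n complex amplitudes. A short check of my own (the qubit counts are arbitrary) of why a hundred qubits already outruns brute-force classical simulation,

```python
# Number of complex amplitudes needed to describe an n-qubit state exactly.
for n in (10, 50, 100):
    print(n, "qubits ->", 2 ** n, "amplitudes")

# 100 qubits -> 1267650600228229401496703205376 (~1.3e30) amplitudes,
# far more than any conceivable classical memory could store.
```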

The drive to make qubits is part of the reason why graphene is a hot research area in general, and why even-denominator FQHE states – with their special anyons – are sought after in particular.

“A state with some number of these anyons can be used to represent a qubit,” says Papić. “Our theory says they should be there and the experiments seem to bear that out – certainly the even-denominator FQHE states seem to be there, at least according to the Geneva experiments.”

That’s still a step away from experimental proof that those even-denominator stair-step states actually contain non-Abelian anyons. More work remains, but Papić is optimistic: “It might be easier to prove in graphene than it would be in semiconductors. Everything is happening right at the surface.”

It’s still early, but it looks as if bilayer graphene may be the magic material that allows this kind of qubit to be built. That would be a major mark on the unlikely line between pencil lead and quantum computers.

Here are links for further research,

January PRL paper mentioned above: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.046602

Experimental paper from the Geneva graphene group, including Abanin: http://pubs.acs.org/doi/abs/10.1021/nl5003922

Experimental paper from the Columbia graphene group, including both Abanin and Papić: http://arxiv.org/abs/1403.2112. This paper is featured in the journal Science.

Related experiment on bilayer graphene by Amir Yacoby’s group at Harvard: http://www.sciencemag.org/content/early/2014/05/28/science.1250270

The Nobel Prize press release on graphene, mentioned above: http://www.nobelprize.org/nobel_prizes/physics/laureates/2010/press.html

I recently posted a piece about some research into the ‘scotch-tape technique’ for isolating graphene (June 30, 2014 posting). Amusingly, Geim argued against calling it the ‘scotch-tape technique’, something I found out only recently.

Surviving 39 minutes at room temperature—record-breaking for quantum materials

There are two news releases about this work, which brings quantum computing a step closer to reality. I’ll start with the Nov. 15, 2013 Simon Fraser University (SFU; located in Metro Vancouver, Canada) news release (Note: A link has been removed),

An international team of physicists led by Simon Fraser University professor Mike Thewalt has overcome a key barrier to building practical quantum computers, taking a significant step to bringing them into the mainstream.

In their record-breaking experiment conducted on SFU’s Burnaby campus [part of Metro Vancouver], the scientists were able to get fragile quantum states to survive in a solid material at room temperature for 39 minutes. For the average person, it may not seem like a long time, but it’s a veritable eternity to a quantum physicist.

“This opens up the possibility of truly long-term coherent information storage at room temperature,” explains Thewalt.

Quantum computers promise to significantly outperform today’s machines at certain tasks, by exploiting the strange properties of subatomic particles. Conventional computers process data stored as strings of ones or zeroes, but quantum objects are not constrained to the either/or nature of binary bits.

Instead, each quantum bit – or qubit – can be put into a superposition of both one and zero at the same time, enabling them to perform multiple calculations simultaneously. For instance, this ability to multi-task could allow quantum computers to crack seemingly secure encryption codes.

“A powerful universal quantum computer would change technology in ways that we already understand, and doubtless in ways we do not yet envisage,” says Thewalt, whose research was published in Science today.

“It would have a huge impact on security, code breaking and the transmission and storage of secure information. It would be able to solve problems which are impossible to solve on any conceivable normal computer. It would be able to model the behaviour of quantum systems, a task beyond the reach of normal computers, leading, for example, to the development of new drugs by a deeper understanding of molecular interactions.”

However, the problem with attempts to build these extraordinary number-crunchers is that superposition states are delicate structures that can collapse like a soufflé if nudged by a stray particle, such as an air molecule.

To minimize this unwanted process, physicists often cool their qubit systems to almost absolute zero (-273 C) and manipulate them in a vacuum. But such setups are finicky to maintain and, ultimately, it would be advantageous for quantum computers to operate robustly at everyday temperatures and pressures.

“Our research extends the demonstrated coherence time in a solid at room temperature by a factor of 100 – and at liquid helium temperature by a factor of 60 (from three minutes to three hours),” says Thewalt.

“These are large, significant improvements in what is possible.”

The November 15, 2013 University of Oxford news release (also on EurekAlert) features their own researcher and more information (e.g., the previous record for maintaining coherence in a solid at room temperature),

An international team including Stephanie Simmons of Oxford University report in this week’s Science a test performed as part of a project led by Mike Thewalt of Simon Fraser University, Canada, and colleagues. …

In the experiment, the team raised the temperature of a system, in which information is encoded in the nuclei of phosphorus atoms in silicon, from -269°C to 25°C and demonstrated that the superposition states survived at this balmy temperature for 39 minutes – outside of silicon the previous record for such a state’s survival at room temperature was around two seconds. [emphasis mine] The team even found that they could manipulate the qubits as the temperature of the system rose, and that they were robust enough for this information to survive being ‘refrozen’ (the optical technique used to read the qubits only works at very low temperatures).

‘Thirty-nine minutes may not seem very long but as it only takes one-hundred-thousandth of a second to flip the nuclear spin of a phosphorus ion – the type of operation used to run quantum calculations – in theory over two million operations could be applied in the time it takes for the superposition to naturally decay by 1%. Having such robust, as well as long-lived, qubits could prove very helpful for anyone trying to build a quantum computer,’ said Stephanie Simmons of Oxford University’s Department of Materials, an author of the paper.
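
Simmons’ “over two million operations” figure checks out with a quick back-of-the-envelope calculation of my own, assuming a simple exponential decay of the coherence,

```python
import math

T2 = 39 * 60          # coherence time: 39 minutes in seconds
t_op = 1e-5           # one nuclear spin flip: one-hundred-thousandth of a second

# Assuming exponential decay exp(-t/T2), the time for coherence to fall by 1%
# is t = -T2 * ln(0.99), roughly 0.01 * T2.
t_1_percent = -T2 * math.log(0.99)
print(f"time to 1% decay: {t_1_percent:.1f} s")                   # ~23.5 s
print(f"operations in that window: {t_1_percent / t_op:,.0f}")    # ~2.35 million
```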

The team began with a sliver of silicon doped with small amounts of other elements, including phosphorus. Quantum information was encoded in the nuclei of the phosphorus atoms: each nucleus has an intrinsic quantum property called ‘spin’, which acts like a tiny bar magnet when placed in a magnetic field. Spins can be manipulated to point up (0), down (1), or any angle in between, representing a superposition of the two other states.

The team prepared their sample at just 4°C above absolute zero (-269°C) and placed it in a magnetic field. Additional magnetic field pulses were used to tilt the direction of the nuclear spin and create the superposition states. When the sample was held at this cryogenic temperature, the nuclear spins of about 37% of the ions – a typical benchmark to measure quantum coherence – remained in their superposition state for three hours. The same fraction survived for 39 minutes when the temperature of the system was raised to 25°C.

There is still some work ahead before the team can carry out large-scale quantum computations. The nuclear spins of the 10 billion or so phosphorus ions used in this experiment were all placed in the same quantum state. To run calculations, however, physicists will need to place different qubits in different states. ‘To have them controllably talking to one another – that would address the last big remaining challenge,’ said Simmons.

Even for the uninitiated, going from a record of two seconds to 39 minutes has to raise an eyebrow.

Here’s a link to and a citation for the paper,

Room-Temperature Quantum Bit Storage Exceeding 39 Minutes Using Ionized Donors in Silicon-28 by Kamyar Saeedi, Stephanie Simmons, Jeff Z. Salvail, Phillip Dluhy, Helge Riemann, Nikolai V. Abrosimov, Peter Becker, Hans-Joachim Pohl, John J. L. Morton, & Mike L. W. Thewalt. Science 15 November 2013: Vol. 342 no. 6160 pp. 830-833 DOI: 10.1126/science.1239584

This paper is behind a paywall.

ETA Nov. 18, 2013: University College London has also issued a Nov. 15, 2013 news release on EurekAlert about this work. While some of this is repetitive, I think there’s enough new information to make this excerpt worthwhile,

The team even found that they could manipulate the qubits as the temperature of the system rose, and that they were robust enough for this information to survive being ‘refrozen’ (the optical technique used to read the qubits only works at very low temperatures). 39 minutes may not sound particularly long, but since it only takes a tiny fraction of a second to run quantum computations by flipping the spin of phosphorus ions (electrically charged phosphorus atoms), many millions of operations could be carried out before a system like this decays.

“This opens up the possibility of truly long-term coherent information storage at room temperature,” said Mike Thewalt (Simon Fraser University), the lead researcher in this study.

The team began with a sliver of silicon doped with small amounts of other elements, including phosphorus. They then encoded quantum information in the nuclei of the phosphorus atoms: each nucleus has an intrinsic quantum property called ‘spin’, which acts like a tiny bar magnet when placed in a magnetic field. Spins can be manipulated to point up (0), down (1), or any angle in between, representing a superposition of the two other states.

The team prepared their sample at -269 °C, just 4 degrees above absolute zero, and placed it in a magnetic field. They used additional magnetic field pulses to tilt the direction of the nuclear spin and create the superposition states. When the sample was held at this cryogenic temperature, the nuclear spins of about 37 per cent of the ions – a typical benchmark to measure quantum coherence – remained in their superposition state for three hours. The same fraction survived for 39 minutes when the temperature of the system was raised to 25 °C.


Quantum teleportation from a Japan-Germany collaboration

An Aug. 15, 2013 Johannes Gutenberg University Mainz press release (also on EurekAlert) has somewhat gobsmacked me with its talk of teleportation,

By means of the quantum-mechanical entanglement of spatially separated light fields, researchers in Tokyo and Mainz have managed to teleport photonic qubits with extreme reliability. This means that a decisive breakthrough has been achieved some 15 years after the first experiments in the field of optical teleportation. The success of the experiment conducted in Tokyo is attributable to the use of a hybrid technique in which two conceptually different and previously incompatible approaches were combined. “Discrete digital optical quantum information can now be transmitted continuously – at the touch of a button, if you will,” explained Professor Peter van Loock of Johannes Gutenberg University Mainz (JGU). As a theoretical physicist, van Loock advised the experimental physicists in the research team headed by Professor Akira Furusawa of the University of Tokyo on how they could most efficiently perform the teleportation experiment to ultimately verify the success of quantum teleportation.

The press release goes on to describe quantum teleportation,

Quantum teleportation involves the transfer of arbitrary quantum states from a sender, dubbed Alice, to a spatially distant receiver, named Bob. This requires that Alice and Bob initially share an entangled quantum state across the space in question, e.g., in the form of entangled photons. Quantum teleportation is of fundamental importance to the processing of quantum information (quantum computing) and quantum communication. Photons are especially valued as ideal information carriers for quantum communication since they can be used to transmit signals at the speed of light. A photon can represent a quantum bit or qubit analogous to a binary digit (bit) in standard classical information processing. Such photons are known as ‘flying quantum bits.’

Before explaining the new technique, there’s an overview of previous efforts,

The first attempts to teleport single photons or light particles were made by the Austrian physicist Anton Zeilinger. Various other related experiments have been performed in the meantime. However, teleportation of photonic quantum bits using conventional methods proved to have its limitations because of experimental deficiencies and difficulties with fundamental principles.

What makes the experiment in Tokyo so different is the use of a hybrid technique. With its help, a completely deterministic and highly reliable quantum teleportation of photonic qubits has been achieved. The accuracy of the transfer was 79 to 82 percent for four different qubits. In addition, the qubits were teleported much more efficiently than in previous experiments, even at a low degree of entanglement.
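
A bit of context that isn’t in the Mainz release: the usual benchmark for teleporting an unknown qubit is the best “measure-and-prepare” strategy without entanglement, which cannot exceed an average fidelity of 2/3 (about 67 percent), so the quoted 79 to 82 percent figures clear that bar. A tiny sketch of the comparison,

```python
# Classical (measure-and-prepare) limit for teleporting an unknown qubit:
# average fidelity of at most 2/3 ~= 0.667.
classical_bound = 2 / 3

# The release quotes transfer accuracies between 79% and 82% for four test states.
low, high = 0.79, 0.82
print(f"classical bound: {classical_bound:.3f}")
print(f"reported range clears it: {low > classical_bound} to {high > classical_bound}")
```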

The concept of entanglement was first formulated by Erwin Schrödinger and involves a situation in which two quantum systems, such as two light particles for example, are in a joint state, so that their behavior is mutually dependent to a greater extent than is normally (classically) possible. In the Tokyo experiment, continuous entanglement was achieved by means of entangling many photons with many other photons. This meant that the complete amplitudes and phases of two light fields were quantum correlated. Previous experiments only had a single photon entangled with another single photon – a less efficient solution. “The entanglement of photons functioned very well in the Tokyo experiment – practically at the press of a button, as soon as the laser was switched on,” said van Loock, Professor for Theory of Quantum Optics and Quantum Information at Mainz University. This continuous entanglement was accomplished with the aid of so-called ‘squeezed light’, which takes the form of an ellipse in the phase space of the light field. Once entanglement has been achieved, a third light field can be attached to the transmitter. From there, in principle, any state and any number of states can be transmitted to the receiver. “In our experiment, there were precisely four sufficiently representative test states that were transferred from Alice to Bob using entanglement. Thanks to continuous entanglement, it was possible to transmit the photonic qubits in a deterministic fashion to Bob, in other words, in each run,” added van Loock.

Earlier attempts to achieve optical teleportation were performed differently and, before now, the concepts used have proved to be incompatible. Although in theory it had already been assumed that the two different strategies, from the discrete and the continuous world, needed to be combined, it represents a technological breakthrough that this has actually now been experimentally demonstrated with the help of the hybrid technique. “The two separate worlds, the discrete and the continuous, are starting to converge,” concluded van Loock.

The researchers have provided an image illustrating quantum teleportation,

Deterministic quantum teleportation of a photonic quantum bit. Each qubit that flies from the left into the teleporter leaves the teleporter on the right with a loss of quality of only around 20 percent, a value not achievable without entanglement. Courtesy University of Tokyo

Here’s a citation for and a link to the published paper,

Deterministic quantum teleportation of photonic quantum bits by a hybrid technique by Shuntaro Takeda, Takahiro Mizuta, Maria Fuwa, Peter van Loock & Akira Furusawa. Nature 500, 315–318 (15 August 2013) doi:10.1038/nature12366 Published online 14 August 2013

This article is behind a paywall, although there is a preview capability (ReadCube Access) available.

Connecting the dots in quantum computing—the secret is in the spins

The Feb. 26, 2013 University of Pittsburgh news release puts it a lot better than I can,

Recent research offers a new spin on using nanoscale semiconductor structures to build faster computers and electronics. Literally.

University of Pittsburgh and Delft University of Technology researchers reveal in the Feb. 17 [2013] online issue of Nature Nanotechnology a new method that better preserves the units necessary to power lightning-fast electronics, known as qubits (pronounced CUE-bits). Hole spins, rather than electron spins, can keep quantum bits in the same physical state up to 10 times longer than before, the report finds.

“Previously, our group and others have used electron spins, but the problem was that they interacted with spins of nuclei, and therefore it was difficult to preserve the alignment and control of electron spins,” said Sergey Frolov, assistant professor in the Department of Physics and Astronomy within Pitt’s Kenneth P. Dietrich School of Arts and Sciences, who did the work as a postdoctoral fellow at Delft University of Technology in the Netherlands.

Whereas normal computing bits hold mathematical values of zero or one, quantum bits live in a hazy superposition of both states. It is this quality, said Frolov, which allows them to perform multiple calculations at once, offering exponential speed over classical computers. However, maintaining the qubit’s state long enough to perform computation remains a long-standing challenge for physicists.

“To create a viable quantum computer, the demonstration of long-lived quantum bits, or qubits, is necessary,” said Frolov. “With our work, we have gotten one step closer.”

Thankfully, an explanation of the hole spins vs. electron spins issue follows,

The holes within hole spins, Frolov explained, are literally empty spaces left when electrons are taken out. Using extremely thin filaments called InSb (indium antimonide) nanowires, the researchers created a transistor-like device that could transform the electrons into holes. They then precisely placed one hole in a nanoscale box called “a quantum dot” and controlled the spin of that hole using electric fields. This approach – featuring nanoscale size and a higher density of devices on an electronic chip – is far more advantageous than magnetic control, which has been typically employed until now, said Frolov.

“Our research shows that holes, or empty spaces, can make better spin qubits than electrons for future quantum computers.”

“Spins are the smallest magnets in our universe. Our vision for a quantum computer is to connect thousands of spins, and now we know how to control a single spin,” said Frolov. “In the future, we’d like to scale up this concept to include multiple qubits.”

This graphic displays spin qubits within a nanowire. [downloaded from http://www.news.pitt.edu/connecting-quantum-dots]

From the news release,

Coauthors of the paper include Leo Kouwenhoven, Stevan Nadj-Perge, Vlad Pribiag, Johan van den Berg, and Ilse van Weperen of Delft University of Technology; and Sebastien Plissard and Erik Bakkers from Eindhoven University of Technology in the Netherlands.

The paper, “Electrical control over single hole spins in nanowire quantum dots,” appeared online Feb. 17 in Nature Nanotechnology. The research was supported by the Dutch Organization for Fundamental Research on Matter, the Netherlands Organization for Scientific Research, and the European Research Council.

According to the scientists, we’re going to be waiting a bit longer for a quantum computer, but this work is promising. Their paper is behind a paywall.

Unforgeable credit card and qubits?

The headlines for the news item on Nanowerk and for the originating news release say ‘unforgeable’ but the researchers are being a little more cautious, as I’m also seeing the words ‘almost impossible’ and ‘high probability’ in the text.

First, here’s a bit more about the researchers and their paper in an Oct. 2, 2012 news item on Nanowerk,

A team of physicists at Max-Planck-Institute of Quantum Optics (Garching [MPQ]), Harvard University (Cambridge, USA), and California Institute of Technology (Pasadena, USA) has demonstrated that such [noise-tolerant] protocols can be made tolerant to noise while ensuring rigorous security at the same time (“Unforgeable noise-tolerant quantum tokens”).

The researchers seem to be relying on a principle that perhaps we could call ‘imperfection’. The Max Planck Institute of Quantum Optics’ Oct. 2, 2012 news release, which originated the news item, provides some context,

Whoever has paid a hotel bill by credit card knows about the pending danger: given away the numbers of the card, the bank account and so on, an adversary might be able to forge a duplicate, take all the money from the account and ruin the person. On the other hand, as first acknowledged by Stephen Wiesner in 1983, nature provides ways to prevent forging: it is, for example, impossible to clone quantum information which is stored on a qubit. So why not use these features for the safe verification of quantum money? While the digits printed on a credit card are quite robust to the usual wear and tear of normal use in a wallet, its quantum information counterparts are generally quite challenged by noise, decoherence and operational imperfections. Therefore it is necessary to lower the requirements on the authentication process. A team of physicists at Max-Planck-Institute of Quantum Optics (Garching), Harvard University (Cambridge, USA), and California Institute of Technology (Pasadena, USA) has demonstrated that such protocols can be made tolerant to noise while ensuring rigorous security at the same time (Proceedings of the National Academy of Science (PNAS), 18 September, 2012 [article behind a paywall]).

The researchers illustrated their news release with this image,

Figure: Illustration of a quantum bill (IN QUANTUM PHYSICS WE TRUST)
© background by vektorportal.com, collage by F. Pastawski

I have worked as a technical writer for telecommunications companies and in fact started with a data communications company that specialized in software for the financial services sector. Consequently, I feel reasonably comfortable about presenting this very brief overview of what happens when you (a legitimate user) put your credit/charge card or your bank/direct pay card (e.g. Interac) into a reader at a store or bank as a little background information before you read more about the ‘quantum credit card’.

  • All cards have bits of information on the magnetic strip which identify you and your financial institutions, e.g. your name, MasterCard, (issued by) Bank of Montreal
  • That data along with whatever amount you wish to charge or withdraw from your bank account is relayed from the reader through various pieces of hardware and software both to and from your financial institutions.
  • The hardware and software used in the transaction all operate according to protocols (rules for handling data). Different pieces of hardware can and often do have different protocols, as do the different pieces of software.
    • For example, if your cards and institutions are based in Mexico and you’re in India trying to charge a purchase, your data is being sent through the network set up by the various financial institutions (hardware and software) in India then eventually bounced to Mexico (it may not be direct) via satellite and sent through the networks in Mexico on to your institutions (hardware and software) and then back again. That’s a lot of hardware and software and while some of it may operate according to the same protocols, it’s reasonable to assume there will be a lot of changes and imperfections will creep in, and this is the source of at least some of what the engineers call ‘noise’.

What I’ve just described (as accurately as I can recall) is the process for a legitimate user. These researchers are trying to find a means of foiling illegitimate users, which shifts the focus. Now, if I understand the information in the news release properly, the researchers have devised and tested two protocols for their unforgeable credit card (from the news release),

In both approaches, the bank issues a token and sends it to the holder. The “identity” of the token can be encoded on photons transmitted via an optical fibre or on nuclear spins in a solid memory transferred to the holder. However, only the bank stores a full classical description of these quantum states.

In the approach denoted by “quantum ticket”, the holder has to return the token to the bank or another trusted verifier for validation. The verifier is willing to tolerate a certain fraction of errors which should be enough to accommodate the imperfections associated with encoding, storage and decoding of individual quantum bits. The only information returned to the holder is whether the ticket has been accepted or rejected. Thus it is “consumed” and no longer available to the holder. The scientists show that through such an approach, both the likelihood of rejecting the token from an honest user and that of accepting a counterfeit can be made negligible.

The second approach is the “classical verification quantum ticket”. In some cases it may be impossible that the quantum tickets are given back to the bank physically. Here the holder has to validate his quantum token remotely – by answering challenge questions. The group considers a scheme where the quantum information is organized in blocks of qubit pairs. A non-revealing challenge question consists of requesting the holder to use a specific measurement basis for each block. By doing so, the holder is capable of providing a correct answer, but the token is consumed. This excludes the possibility for a dishonest user to cheat by answering complementary questions. As before, the given tolerance threshold determines the number of correct answers that is necessary for the verification of the token. The block structure used for the tokens allows exponentially suppressing the undesired capability of a dishonest holder to answer two complementary questions while assuring a true holder’s token will be authenticated with a very high probability.

For both protocols a realistic noise tolerance can be achieved.  “We can deduce from theory that on average no more than 83% of the secret digits may be duplicated correctly by a counterfeiter. Under realistic conditions, we can assume that an honest participant should be able to recover 95% of the digits. If now the verifier sets the tolerance level to 90%, it will be almost impossible [emphasis mine] to accept fraudulent tokens or to reject an authentic holder,” Dr. Pastawski [Dr. Fernando Pastawski (MPQ)] explains.
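
Here’s a rough way of my own to see why a 90% threshold can separate honest holders from counterfeiters. This is not the authors’ actual protocol; it just plugs the quoted per-digit numbers into a binomial estimate for an illustrative token size,

```python
from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of answering at least k digits correctly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 1000                     # digits/qubits in the token (illustrative size, not from the paper)
threshold = int(0.90 * n)    # verifier demands at least 90% correct

p_honest = prob_at_least(n, threshold, 0.95)   # honest holder recovers ~95% per digit
p_forger = prob_at_least(n, threshold, 0.83)   # counterfeiter limited to ~83% per digit

print(f"honest holder accepted: {p_honest:.12f}")   # ~1: rejection is vanishingly rare
print(f"counterfeit accepted:   {p_forger:.2e}")    # tiny, and it shrinks further as n grows
```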

I think they’re proposing two different approaches rather than the simultaneous use of two different protocols.

I’ve highlighted ‘almost impossible’ in the text of the news release as it is not the same thing as ‘impossible’ which is implied by the word ‘unforgeable’. It’s been my observation that whenever crime fighter types think they’ve devised a criminal-proof solution, criminals make a point of subverting the new technology.  In any event, we’re a long way from seeing these ‘unforgeable’ credit cards, from the news release,

“I expect to live to see such applications become commercially available. However quantum memory technology still needs to mature for such protocols to become viable,” the scientist [Pastawski] adds.