Tag Archives: qubits

Using measurements to generate quantum entanglement and teleportation

Caption: The researchers at Google Quantum AI and Stanford University explored how measurements can fundamentally change the structure of quantum information in space-time. Credit: Google Quantum AI, designed by Sayo-Art

An interesting approach to illustrating a complex scientific concept! This October 18, 2023 news item on phys.org describes the measurement problem,

Quantum mechanics is full of weird phenomena, but perhaps none as weird as the role measurement plays in the theory. Since a measurement tends to destroy the “quantumness” of a system, it seems to be the mysterious link between the quantum and classical world. And in a large system of quantum bits of information, known as “qubits,” the effect of measurements can induce dramatically new behavior, even driving the emergence of entirely new phases of quantum information.

This happens when two competing effects come to a head: interactions and measurement. In a quantum system, when the qubits interact with one another, their information becomes shared nonlocally in an “entangled state.” But if you measure the system, the entanglement is destroyed. The battle between measurement and interactions leads to two distinct phases: one where interactions dominate and entanglement is widespread, and one where measurements dominate, and entanglement is suppressed.

An October 18, 2023 Google Quantum AI news release on EurekAlert, which originated the news item, provides more information about a research collaboration between Google and Stanford University,

As reported today [October 18, 2023] in the journal Nature, researchers at Google Quantum AI and Stanford University have observed the crossover between these two regimes — known as a “measurement-induced phase transition” — in a system of up to 70 qubits. This is by far the largest system in which measurement-induced effects have been explored. The researchers also saw signatures of a novel form of “quantum teleportation” — in which an unknown quantum state is transferred from one set of qubits to another — that emerges as a result of these measurements. These studies could help inspire new techniques useful for quantum computing.

One can visualize the entanglement in a system of qubits as an intricate web of connections. When we measure an entangled system, the impact it has on the web depends on the strength of the measurement. It could destroy the web completely, or it could snip and prune selected strands of the web, but leave others intact. 

To actually see this web of entanglement in an experiment is notoriously challenging. The web itself is invisible, so researchers can only infer its existence by seeing statistical correlations between the measurement outcomes of qubits. Many, many runs of the same experiment are needed to infer the pattern of the web. This and other challenges have plagued past experiments and limited the study of measurement-induced phase transitions to very small system sizes. 

To address these challenges, the researchers used a few experimental sleights of hand. First, they rearranged the order of operations so that all the measurements could be made at the end of the experiment, rather than interleaved throughout, thus reducing the complexity of the experiment. Second, they developed a new way to measure certain features of the web with a single “probe” qubit. In this way, they could learn more about the entanglement web from fewer runs of the experiment than had been previously required. Finally, the probe, like all qubits, was susceptible to unwanted noise in the environment. This is normally seen as a bad thing, as noise can disrupt quantum calculations, but the researchers turned this bug into a feature by noting that the probe’s sensitivity to noise depended on the nature of the entanglement web around it. They could therefore use the probe’s noise sensitivity to infer the entanglement of the whole system.

The team first looked at this difference in sensitivity to noise in the two entanglement regimes and found distinctly different behaviors. When measurements dominated over interactions (the “disentangling phase”), the strands of the web remained relatively short. The probe qubit was only sensitive to the noise of its nearest qubits. In contrast, when the measurements were weaker and entanglement was more widespread (the “entangling phase”) the probe was sensitive to noise throughout the entire system. The crossover between these two sharply contrasting behaviors is a signature of the sought-after measurement-induced phase transition.

The team also demonstrated a novel form of quantum teleportation that emerged naturally from the measurements: by measuring all but two distant qubits in a weakly entangled state, stronger entanglement was generated between those two distant qubits. The ability to generate measurement-induced entanglement across long distances enables the teleportation observed in the experiment.
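Quantum teleportation sounds like magic, but the textbook three-qubit version is easy to simulate on a laptop. Here's a minimal Python/numpy sketch of that standard protocol — my own illustration of how one joint measurement plus two classically conditioned corrections moves an unknown state onto a distant qubit, not the 70-qubit measurement-induced scheme reported in the paper,

```python
import numpy as np

rng = np.random.default_rng(seed=7)

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # control is the first qubit

# The unknown state |psi> = a|0> + b|1> sits on qubit 0
a, b = 0.6, 0.8j
psi = np.array([a, b], dtype=complex)

# Qubits 1 and 2 share a Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                       # full 3-qubit state, qubit 0 most significant

# Bell-basis measurement of qubits 0 and 1: CNOT(0->1), Hadamard on 0, then measure both
state = np.kron(CNOT, I) @ state
state = np.kron(H, np.kron(I, I)) @ state

amps = state.reshape(2, 2, 2)                    # indices (q0, q1, q2)
probs = np.sum(np.abs(amps) ** 2, axis=2).ravel()
m0, m1 = divmod(rng.choice(4, p=probs), 2)       # sample a random measurement outcome

q2 = amps[m0, m1, :]                             # post-measurement state of qubit 2
q2 = q2 / np.linalg.norm(q2)

# Classical feed-forward: the two measured bits tell us which correction to apply
if m1 == 1:
    q2 = X @ q2
if m0 == 1:
    q2 = Z @ q2

print("measurement outcome:", (int(m0), int(m1)))
print("qubit 2 now matches |psi> (up to a global phase):",
      bool(np.isclose(abs(np.vdot(psi, q2)), 1.0)))
```

Run it a few times with different seeds and the measured bits change, but qubit 2 always ends up carrying the original state.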

The stability of entanglement against measurements in the entangling phase could inspire new schemes to make quantum computing more robust to noise. The role that measurements play in driving new phases and physical phenomena is also of fundamental interest to physicists. Stanford professor and co-author of the study, Vedika Khemani, says, “Incorporating measurements into dynamics introduces a whole new playground for many-body physics where many fascinating and new types of non-equilibrium phases could be found. We explore a few of these striking and counter-intuitive measurement induced phenomena in this work, but there is much more richness to be discovered in the future.” 

Before getting to the citation for and link to the paper, I have an interview with some of the researchers, written up by Holly Alyssa MacCormick (associate director of public relations, science writer, and news editor for the Stanford School of Humanities and Sciences) in an October 18, 2023 article for Stanford University. Note 1: Some of this will be redundant; Note 2: Links have been removed,

Harnessing the “weirdness” of quantum mechanics to solve practical problems is the long-standing promise of quantum computing. But much like the state of the cat in Erwin Schrödinger’s famous thought experiment, quantum mechanics is still a box of unknowns. Similar to the solid, liquid, and gas phases of matter, the organization of quantum information, too, can assume different phases. Yet unlike the phases of matter we are familiar with in everyday life, the phases of quantum information are much harder to formulate and observe and as a result have been only a theoretical dream until recently.

Measurements are arguably the weirdest facet of quantum mechanics. Intuition tells us that a state has some definite property and measurement reveals that property. However, measurements in quantum mechanics produce intrinsically random results, and the act of measurement irreversibly changes the state itself. Unlike laptops, smartphones, and other classical computers that rely on binary “bits” to code in the state of 0 (off) or 1 (on), quantum computers use “qubits” of information that can be in the state of 0, 1, or 0 and 1 at the same time, a concept known as superposition. The act of measurement doesn’t just extract information, but also changes the state, randomly “collapsing” a superposition into a specific value (0 or 1).
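For readers who like to see an idea run, here's a tiny numpy sketch (mine, not the researchers') of that last point: a qubit prepared in an equal superposition gives random 0s and 1s shot by shot, and each measurement collapses it onto whichever basis state was observed,

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A single qubit in an equal superposition: (|0> + |1>) / sqrt(2)
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

def measure(state):
    """Projective measurement in the computational basis.
    Returns the random outcome (0 or 1) and the collapsed state."""
    p0 = abs(state[0]) ** 2                  # Born rule: probability of reading 0
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0                 # the superposition is gone after the shot
    return outcome, collapsed

shots = [measure(state)[0] for _ in range(1000)]
print("fraction of 1s over 1000 shots:", np.mean(shots))   # close to 0.5
```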

Moreover, this collapse affects not just the qubit that was measured, but also potentially the entire system—an effect described by Einstein as “spooky action at a distance.” This is due to “entanglement,” a quantum property that allows multiple particles in different places to jointly be in superposition, which is a key ingredient for quantum computing. The collapse of an entangled state can also enable spooky phenomena such as “teleportation,” thereby irretrievably altering the “arrow of time” (the concept that time moves in one forward direction) that governs our everyday experience.

In other words, measurements can be used to fundamentally reorganize the structure of quantum information in space and time.

Now, a new collaboration between Stanford and Google Quantum AI investigates the effect of measurements on quantum systems of many particles on Google’s quantum computer and has obtained the largest experimental demonstration of novel measurement-induced phases of quantum information to date. The study was co-led by Jesse Hoke, a physics graduate student and fellow at Stanford’s Quantum Science and Engineering initiative (Q-FARM), Matteo Ippoliti, a former postdoctoral scholar in the Department of Physics, and senior author Vedika Khemani, associate professor of physics at the Stanford School of Humanities and Sciences and Q-FARM. Their results were published Oct. 18 in the journal Nature.

Here, Hoke, Ippoliti, and Khemani discuss how they observed measurement-induced phases of quantum information—a feat once thought to be beyond the realm of what could be achieved in an experiment—and how their new insights could help pave the way for advancements in quantum science and engineering.

Question: What distinguishes the phases investigated in this study from one another, and what is teleportation?

Ippoliti: In the simplest case, there are two phases. In one phase, the structure of quantum information in the system forms a strongly connected web where qubits share a lot of entanglement, even at large spatial distances and/or temporal separations. In the other, the system is weakly connected, so correlations like entanglement decay quickly with distance or time. These are the two phases that we probed in our experiment. The strongly entangled phase enables teleportation, which occurs when the state of one qubit is instantly transmitted, or “teleported,” to another far away qubit by measuring all but those two qubits.

Question: How did you control when a phase transition occurred?

Khemani: The competing forces at play are the interactions between qubits, which tend to build entanglement, and measurements of the qubits, which can destroy it. This is the famous “wave function collapse” of quantum mechanics—think of Schrödinger’s cat “collapsing” into one of two states (dead or alive) when we open the box. However, because of entanglement, the collapse is not restricted to the qubit we directly measure but affects the rest of the system too. By controlling the strength or frequency of measurements on the quantum computer, we can induce a phase transition between an entangled phase and a disentangled one.
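To get a feel for that competition, here's a rough toy simulation in Python/numpy: a small one-dimensional chain of qubits is scrambled by random two-qubit gates, and after each layer every qubit is measured with some probability. This is a generic model from the theory literature rather than the circuit Google actually ran, and every parameter (chain length, depth, measurement rate) is my own choice for illustration,

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def haar_unitary(dim):
    """A random unitary (roughly Haar-distributed) from a QR decomposition."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_two_qubit(state, gate, i, n):
    """Apply a 4x4 gate to neighbouring qubits (i, i+1) of an n-qubit state."""
    t = state.reshape(2**i, 4, 2**(n - i - 2))
    return np.einsum('ab,ibj->iaj', gate, t).reshape(-1)

def measure_qubit(state, i, n):
    """Projectively measure qubit i and return the collapsed, renormalised state."""
    t = state.reshape(2**i, 2, 2**(n - i - 1))
    p0 = np.sum(np.abs(t[:, 0, :]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    proj = np.zeros_like(t)
    proj[:, outcome, :] = t[:, outcome, :]
    proj = proj.reshape(-1)
    return proj / np.linalg.norm(proj)

def half_chain_entropy(state, n):
    """Von Neumann entanglement entropy (in bits) across the middle of the chain."""
    s = np.linalg.svd(state.reshape(2**(n // 2), -1), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def run_circuit(n, depth, meas_prob):
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                              # start in |00...0>
    for layer in range(depth):
        for i in range(layer % 2, n - 1, 2):    # brick-wall pattern of random gates
            state = apply_two_qubit(state, haar_unitary(4), i, n)
        for i in range(n):                      # then measure each qubit with prob. meas_prob
            if rng.random() < meas_prob:
                state = measure_qubit(state, i, n)
    return half_chain_entropy(state, n)

n_qubits, depth = 10, 20
for rate in (0.05, 0.5):
    entropies = [run_circuit(n_qubits, depth, rate) for _ in range(5)]
    print(f"measurement rate {rate}: mean half-chain entropy = {np.mean(entropies):.2f} bits")
```

With rare measurements the half-chain entropy should come out large (interactions win); with frequent measurements it should come out small (measurements win) — a toy version of the two phases described above.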

Question: What were some of the challenges your team needed to overcome to measure quantum states, and how did you do it?

Ippoliti: Measurements in quantum mechanics are inherently random, which makes observing these phases notoriously challenging. This is because every repetition of our experiment produces a different, random-looking quantum state. This is a problem because detecting entanglement (the feature that sets our two phases apart) requires observations on many copies of the same state. To get around this difficulty, we developed a diagnostic that cross-correlates data from the quantum processor with the results of simulations on classical computers. This hybrid quantum-classical diagnostic allowed us to see evidence of the different phases on up to 70 qubits, making this one of the largest digital quantum simulations and experiments to date.

Hoke: Another challenge was that quantum experiments are currently limited by environmental noise. Entanglement is a delicate resource that is easily destroyed by interactions from the outside environment, which is the primary challenge in quantum computing. In our setup, we probe the entanglement structure between the system’s qubits, which is destroyed if the system is not perfectly isolated and instead gets entangled with the surrounding environment. We addressed this challenge by devising a diagnostic that uses noise as a feature rather than a bug—the two phases (weak and strong entanglement) respond to noise in different ways, and we used this as a probe of the phases.

Khemani: In addition, we used the fact that the “arrow of time” loses meaning with measurement-induced teleportation. This allowed us to reorganize the sequence of operations on the quantum computer in advantageous ways to mitigate the effects of noise and to devise new probes of the organization of quantum information in space-time.

Question: What do the findings mean?

Khemani: At the level of fundamental science, our experiments demonstrate new phenomena that extend our familiar concepts of “phase structure.” Instead of thinking of measurements merely as probes, we are now thinking of them as an intrinsic part of quantum dynamics, which can be used to create and manipulate novel quantum correlations. At the level of applications, using measurements to robustly generate structured entanglement is inspiring new ways to make quantum computing more robust against noise. More generally, our understanding of general phases of quantum information and dynamics is still nascent, and many exciting surprises await.

Acknowledgements

Hoke conducted research on this study while working as an intern at Google Quantum AI under the supervision of Xiao Mi and Pedram Roushan. Ippoliti is now an assistant professor of physics at the University of Texas at Austin. Additional co-authors on this study include the Google Quantum AI team and researchers from the University of Massachusetts, Amherst; Auburn University; University of Technology, Sydney; University of California, Riverside; and Columbia University. The full list of authors is available in the Nature paper.

Ippoliti was funded in part by the Gordon and Betty Moore Foundation’s EPiQS Initiative. Khemani was funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences; the Alfred P. Sloan Foundation; and the Packard Foundation.

Here’s a link to and a citation for the paper, Note: There are well over 100 contributors to the paper and I have not listed each one separately; you can find the list if you go to the Nature paper and click on Google Quantum AI and Collaborators in the author field,

Measurement-induced entanglement and teleportation on a noisy quantum processor by Google Quantum AI and Collaborators. Nature volume 622, pages 481–486 (2023) DOI: https://doi.org/10.1038/s41586-023-06505-7 Published online: 18 October 2023 Issue Date: 19 October 2023

This paper is open access.

A jellybean solution to a problem with quantum computing chips

A May 11, 2023 news item on phys.org heralds this new development, Note: A link has been removed,

The silicon microchips of future quantum computers will be packed with millions, if not billions of qubits—the basic units of quantum information—to solve the greatest problems facing humanity. And with millions of qubits needing millions of wires in the microchip circuitry, it was always going to get cramped in there.

But now engineers at UNSW [University of New South Wales] Sydney have made an important step toward solving a long-standing problem about giving their qubits more breathing space—and it all revolves around jellybeans.

Not the kind we rely on for a sugar hit to get us past the 3pm slump. But jellybean quantum dots – elongated areas between qubit pairs that create more space for wiring without interrupting the way the paired qubits interact with each other.

A May 10, 2023 University of New South Wales (UNSW) press release (also published on EurekAlert), which originated the news item, delves further into the ‘jellybean solution’, Note: A link has been removed,

As lead author Associate Professor Arne Laucht explains, the jellybean quantum dot is not a new concept in quantum computing, and has been discussed as a solution to some of the many pathways towards building the world’s first working quantum computer.

“It has been shown in different material systems such as gallium arsenide. But it has not been shown in silicon before,” he says.

Silicon is arguably one of the most important materials in quantum computing, A/Prof. Laucht says, as the infrastructure to produce future quantum computing chips is already available, given we use silicon chips in classical computers. Another benefit is that you can fit so many qubits (in the form of electrons) on the one chip.

“But because the qubits need to be so close together to share information with one another, placing wires between each pair was always going to be a challenge.”

In a study published today in Advanced Materials, the UNSW team of engineers describe how they showed in the lab that jellybean quantum dots were possible in silicon. This now opens the way for qubits to be spaced apart to ensure that the wires necessary to connect and control the qubits can be fit in between.

How it works

In a normal quantum dot using spin qubits, single electrons are pulled from a pool of electrons in silicon to sit under a ‘quantum gate’ – where the spin of each electron represents the computational state. For example, spin up may represent a 0 and spin down could represent a 1. Each qubit can then be controlled by an oscillating magnetic field of microwave frequency.

But to implement a quantum algorithm, we also need two-qubit gates, where the control of one qubit is conditional on the state of the other. For this to work, both quantum dots need to be placed very closely, just a few 10s of nanometres apart so their spins can interact with one another. (To put this in perspective, a single human hair is about 100,000 nanometres thick.)

But moving them further apart to create more real estate for wiring has always been the challenge facing scientists and engineers. The problem was that as the paired qubits moved apart, they would stop interacting.

The jellybean solution represents a way of having both: nicely spaced qubits that continue to influence one another. To make the jellybean, the engineers found a way to create a chain of electrons by trapping more electrons in between the qubits. This acts as the quantum version of a string phone so that the two paired qubit electrons at each end of the jellybean can continue to talk to one another. Only the electrons at each end are involved in any computations, while the electrons in the jellybean dot are there to keep them interacting while spread apart.
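The ‘string phone’ picture can be made concrete with a toy spin-chain model. The sketch below (plain numpy, my own illustration rather than the actual device physics in the paper) puts a spin flip on one end of a five-spin chain and shows that it reaches the far end only because the mediator spins in the middle pass it along,

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_op, site, n):
    """Embed a single-spin operator at position `site` in an n-spin chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, site_op if k == site else I2)
    return out

n = 5        # two "qubit" spins at the ends, three mediator spins in between
J = 1.0      # nearest-neighbour coupling strength (arbitrary units)

# XY coupling between neighbours lets a flipped spin hop along the chain
H = sum((J / 2) * (op_on(X, i, n) @ op_on(X, i + 1, n) +
                   op_on(Y, i, n) @ op_on(Y, i + 1, n)) for i in range(n - 1))

# Start with only the left end spin flipped: |10000>.  The right end spin can
# learn about that flip only through the spins in between.
start = np.zeros(2**n, dtype=complex)
start[1 << (n - 1)] = 1.0
end_index = 1                                   # |00001>: the flip has reached the right end

evals, evecs = np.linalg.eigh(H)
def evolve(t):
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ start))

times = np.linspace(0, 20, 400)
arrival = [abs(evolve(t)[end_index]) ** 2 for t in times]
print(f"largest probability of finding the flip on the far spin: {max(arrival):.2f}")
```

Remove the middle couplings and that probability stays at zero — the end spins only ‘talk’ because the chain carries the conversation.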

The lead author of the paper, former PhD student Zeheng Wang says the number of extra electrons pulled into the jellybean quantum dot is key to how they arrange themselves.

“We showed in the paper that if you only load a few electrons in that puddle of electrons that you have underneath, they break into smaller puddles. So it’s not one continuous jellybean quantum dot, it’s a smaller one here, and a bigger one in the middle and a smaller one there. We’re talking of a total of three to maybe ten electrons.

“It’s only when you go to larger numbers of electrons, say 15 or 20 electrons, that the jellybean becomes more continuous and homogeneous. And that’s where you have your well-defined spin and quantum states that you can use to couple qubits to one another.”

Post-jellybean quantum world

A/Prof. Laucht stresses that there is still much work to be done. The team’s efforts for this paper focused on proving the jellybean quantum dot is possible. The next step is to insert working qubits at each end of the jellybean quantum dot and make them talk to one another.

“It is great to see this work realised. It boosts our confidence that jellybean couplers can be utilised in silicon quantum computers, and we are excited to try implementing them with qubits next.”

Whoever wrote the press release seems to have had a lot of fun with the jellybeans. Thank you.

Here’s a link to and a citation for the paper,

Jellybean Quantum Dots in Silicon for Qubit Coupling and On-Chip Quantum Chemistry by Zeheng Wang, MengKe Feng, Santiago Serrano, William Gilbert, Ross C. C. Leon, Tuomo Tanttu, Philip Mai, Dylan Liang, Jonathan Y. Huang, Yue Su, Wee Han Lim, Fay E. Hudson, Christopher C. Escott, Andrea Morello, Chih Hwan Yang, Andrew S. Dzurak, Andre Saraiva, Arne Laucht. Advanced Materials Volume 35, Issue 19 May 11, 2023 2208557 DOI: https://doi.org/10.1002/adma.202208557 First published online: 20 February 2023

This paper is open access.

Graphene can be used in quantum components

A November 3, 2022 news item on phys.org provides a brief history of graphene before announcing the latest work from ETH Zurich,

Less than 20 years ago, Konstantin Novoselov and Andre Geim first created two-dimensional crystals consisting of just one layer of carbon atoms. Known as graphene, this material has had quite a career since then.

Due to its exceptional strength, graphene is used today to reinforce products such as tennis rackets, car tires or aircraft wings. But it is also an interesting subject for fundamental research, as physicists keep discovering new, astonishing phenomena that have not been observed in other materials.

The right twist

Bilayer graphene crystals, in which the two atomic layers are slightly rotated relative to each other, are particularly interesting for researchers. About one year ago, a team of researchers led by Klaus Ensslin and Thomas Ihn at ETH Zurich’s Laboratory for Solid State Physics was able to demonstrate that twisted graphene could be used to create Josephson junctions, the fundamental building blocks of superconducting devices.

Based on this work, researchers were now able to produce the first superconducting quantum interference device, or SQUID, from twisted graphene for the purpose of demonstrating the interference of superconducting quasiparticles. Conventional SQUIDs are already being used, for instance in medicine, geology and archaeology. Their sensitive sensors are capable of measuring even the smallest changes in magnetic fields. However, SQUIDs work only in conjunction with superconducting materials, so they require cooling with liquid helium or nitrogen when in operation.

In quantum technology, SQUIDs can host quantum bits (qubits); that is, they can serve as elements for carrying out quantum operations. “SQUIDs are to superconductivity what transistors are to semiconductor technology—the fundamental building blocks for more complex circuits,” Ensslin explains.

A November 3, 2022 ETH Zurich news release by Felix Würsten, which originated the news item, delves further into the work,

The spectrum is widening

The graphene SQUIDs created by doctoral student Elías Portolés are not more sensitive than their conventional counterparts made from aluminium and also have to be cooled down to temperatures lower than 2 degrees above absolute zero. “So it’s not a breakthrough for SQUID technology as such,” Ensslin says. However, it does broaden graphene’s application spectrum significantly. “Five years ago, we were already able to show that graphene could be used to build single-electron transistors. Now we’ve added superconductivity,” Ensslin says.

What is remarkable is that the graphene’s behaviour can be controlled in a targeted manner by biasing an electrode. Depending on the voltage applied, the material can be insulating, conducting or superconducting. “The rich spectrum of opportunities offered by solid-state physics is at our disposal,” Ensslin says.

Also interesting is that the two fundamental building blocks of a semiconductor (transistor) and a superconductor (SQUID) can now be combined in a single material. This makes it possible to build novel control operations. “Normally, the transistor is made from silicon and the SQUID from aluminium,” Ensslin says. “These are different materials requiring different processing technologies.”

An extremely challenging production process

Superconductivity in graphene was discovered by an MIT [Massachusetts Institute of Technology] research group five years ago, yet there are only a dozen or so experimental groups worldwide that look at this phenomenon. Even fewer are capable of converting superconducting graphene into a functioning component.

The challenge is that scientists have to carry out several delicate work steps one after the other: First, they have to align the graphene sheets at the exact right angle relative to each other. The next steps then include connecting electrodes and etching holes. If the graphene were to be heated up, as happens often during cleanroom processing, the two layers re-align and the twist angle vanishes. “The entire standard semiconductor technology has to be readjusted, making this an extremely challenging job,” Portolés says.

The vision of hybrid systems

Ensslin is thinking one step ahead. Quite a variety of different qubit technologies are currently being assessed, each with its own advantages and disadvantages. For the most part, this is being done by various research groups within the National Center of Competence in Quantum Science and Technology (QSIT). If scientists succeed in coupling two of these systems using graphene, it might be possible to combine their benefits as well. “The result would be two different quantum systems on the same crystal,” Ensslin says.

This would also generate new possibilities for research on superconductivity. “With these components, we might be better able to understand how superconductivity in graphene comes about in the first place,” he adds. “All we know today is that there are different phases of superconductivity in this material, but we do not yet have a theoretical model to explain them.”

Here’s a link to and a citation for the paper,

A tunable monolithic SQUID in twisted bilayer graphene by Elías Portolés, Shuichi Iwakiri, Giulia Zheng, Peter Rickhaus, Takashi Taniguchi, Kenji Watanabe, Thomas Ihn, Klaus Ensslin & Folkert K. de Vries. Nature Nanotechnology volume 17, pages 1159–1164 (2022) Issue Date: November 2022 DOI: https://doi.org/10.1038/s41565-022-01222-0 Published online: 24 October 2022

This paper is behind a paywall.

Exotic magnetism: a quantum simulation from D-Wave Systems

Vancouver (Canada) area company D-Wave Systems is trumpeting itself (with good reason) again. This 2021 ‘milestone’ achievement builds on work from 2018 (see my August 23, 2018 posting for the earlier work). For me, the big excitement was finding the best explanation for quantum annealing and D-Wave’s quantum computers that I’ve seen yet (that explanation and a link to more is at the end of this posting).

A February 18, 2021 news item on phys.org announces the latest achievement,

D-Wave Systems Inc. today [February 18, 2021] published a milestone study in collaboration with scientists at Google, demonstrating a computational performance advantage, increasing with both simulation size and problem hardness, to over 3 million times that of corresponding classical methods. Notably, this work was achieved on a practical application with real-world implications, simulating the topological phenomena behind the 2016 Nobel Prize in Physics. This performance advantage, exhibited in a complex quantum simulation of materials, is a meaningful step in the journey toward applications advantage in quantum computing.

A February 18, 2021 D-Wave Systems press release (also on EurekAlert), which originated the news item, describes the work in more detail,

The work by scientists at D-Wave and Google also demonstrates that quantum effects can be harnessed to provide a computational advantage in D-Wave processors, at problem scale that requires thousands of qubits. Recent experiments performed on multiple D-Wave processors represent by far the largest quantum simulations carried out by existing quantum computers to date.

The paper, entitled “Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets”, was published in the journal Nature Communications (DOI 10.1038/s41467-021-20901-5, February 18, 2021). D-Wave researchers programmed the D-Wave 2000Q™ system to model a two-dimensional frustrated quantum magnet using artificial spins. The behavior of the magnet was described by the Nobel-prize winning work of theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless. They predicted a new state of matter in the 1970s characterized by nontrivial topological properties. This new research is a continuation of previous breakthrough work published by D-Wave’s team in a 2018 Nature paper entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits” (Vol. 560, Issue 7719, August 22, 2018). In this latest paper, researchers from D-Wave, alongside contributors from Google, utilize D-Wave’s lower noise processor to achieve superior performance and glean insights into the dynamics of the processor never observed before.

“This work is the clearest evidence yet that quantum effects provide a computational advantage in D-Wave processors,” said Dr. Andrew King, principal investigator for this work at D-Wave. “Tying the magnet up into a topological knot and watching it escape has given us the first detailed look at dynamics that are normally too fast to observe. What we see is a huge benefit in absolute terms, with the scaling advantage in temperature and size that we would hope for. This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn’t have been possible today without D-Wave’s lower noise processor.”

“The search for quantum advantage in computations is becoming increasingly lively because there are special problems where genuine progress is being made. These problems may appear somewhat contrived even to physicists, but in this paper from a collaboration between D-Wave Systems, Google, and Simon Fraser University [SFU], it appears that there is an advantage for quantum annealing using a special purpose processor over classical simulations for the more ‘practical’ problem of finding the equilibrium state of a particular quantum magnet,” said Prof. Dr. Gabriel Aeppli, professor of physics at ETH Zürich and EPF Lausanne, and head of the Photon Science Division of the Paul Scherrer Institute. “This comes as a surprise given the belief of many that quantum annealing has no intrinsic advantage over path integral Monte Carlo programs implemented on classical processors.”

“Nascent quantum technologies mature into practical tools only when they leave classical counterparts in the dust in solving real-world problems,” said Hidetoshi Nishimori, Professor, Institute of Innovative Research, Tokyo Institute of Technology. “A key step in this direction has been achieved in this paper by providing clear evidence of a scaling advantage of the quantum annealer over an impregnable classical computing competitor in simulating dynamical properties of a complex material. I send sincere applause to the team.”

“Successfully demonstrating such complex phenomena is, on its own, further proof of the programmability and flexibility of D-Wave’s quantum computer,” said D-Wave CEO Alan Baratz. “But perhaps even more important is the fact that this was not demonstrated on a synthetic or ‘trick’ problem. This was achieved on a real problem in physics against an industry-standard tool for simulation–a demonstration of the practical value of the D-Wave processor. We must always be doing two things: furthering the science and increasing the performance of our systems and technologies to help customers develop applications with real-world business value. This kind of scientific breakthrough from our team is in line with that mission and speaks to the emerging value that it’s possible to derive from quantum computing today.”

The scientific achievements presented in Nature Communications further underpin D-Wave’s ongoing work with world-class customers to develop over 250 early quantum computing applications, with a number piloting in production applications, in diverse industries such as manufacturing, logistics, pharmaceutical, life sciences, retail and financial services. In September 2020, D-Wave brought its next-generation Advantage™ quantum system to market via the Leap™ quantum cloud service. The system includes more than 5,000 qubits and 15-way qubit connectivity, as well as an expanded hybrid solver service capable of running business problems with up to one million variables. The combination of Advantage’s computing power and scale with the hybrid solver service gives businesses the ability to run performant, real-world quantum applications for the first time.

That last paragraph seems more sales pitch than research oriented. It’s not unexpected in a company’s press release but I was surprised that the editors at EurekAlert didn’t remove it.

Here’s a link to and a citation for the latest paper,

Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets by Andrew D. King, Jack Raymond, Trevor Lanting, Sergei V. Isakov, Masoud Mohseni, Gabriel Poulin-Lamarre, Sara Ejtemaee, William Bernoudy, Isil Ozfidan, Anatoly Yu. Smirnov, Mauricio Reis, Fabio Altomare, Michael Babcock, Catia Baron, Andrew J. Berkley, Kelly Boothby, Paul I. Bunyk, Holly Christiani, Colin Enderud, Bram Evert, Richard Harris, Emile Hoskinson, Shuiyuan Huang, Kais Jooya, Ali Khodabandelou, Nicolas Ladizinsky, Ryan Li, P. Aaron Lott, Allison J. R. MacDonald, Danica Marsden, Gaelen Marsden, Teresa Medina, Reza Molavi, Richard Neufeld, Mana Norouzpour, Travis Oh, Igor Pavlov, Ilya Perminov, Thomas Prescott, Chris Rich, Yuki Sato, Benjamin Sheldan, George Sterling, Loren J. Swenson, Nicholas Tsai, Mark H. Volkmann, Jed D. Whittaker, Warren Wilkinson, Jason Yao, Hartmut Neven, Jeremy P. Hilton, Eric Ladizinsky, Mark W. Johnson, Mohammad H. Amin. Nature Communications volume 12, Article number: 1113 (2021) DOI: https://doi.org/10.1038/s41467-021-20901-5 Published: 18 February 2021

This paper is open access.

Quantum annealing and more

Dr. Andrew King, one of the D-Wave researchers, has written a February 18, 2021 article on Medium explaining some of the work. I’ve excerpted one of King’s points,

Insight #1: We observed what actually goes on under the hood in the processor for the first time

Quantum annealing — the approach adopted by D-Wave from the beginning — involves setting up a simple but purely quantum initial state, and gradually reducing the “quantumness” until the system is purely classical. This takes on the order of a microsecond. If you do it right, the classical system represents a hard (NP-complete) computational problem, and the state has evolved to an optimal, or at least near-optimal, solution to that problem.

What happens at the beginning and end of the computation are about as simple as quantum computing gets. But the action in the middle is hard to get a handle on, both theoretically and experimentally. That’s one reason these experiments are so important: they provide high-fidelity measurements of the physical processes at the core of quantum annealing. Our 2018 Nature article introduced the same simulation, but without measuring computation time. To benchmark the experiment this time around, we needed lower-noise hardware (in this case, we used the D-Wave 2000Q lower noise quantum computer), and we needed, strangely, to slow the simulation down. Since the quantum simulation happens so fast, we actually had to make things harder. And we had to find a way to slow down both quantum and classical simulation in an equitable way. The solution? Topological obstruction.
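For anyone who wants to poke at the ‘energy landscape’ idea without access to a quantum annealer, here's a purely classical stand-in: simulated annealing on a tiny frustrated Ising problem in plain Python/numpy. To be clear, this is not what a D-Wave processor does — it anneals a physical network of superconducting qubits quantum mechanically — and the couplings below are made up for illustration,

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# A small frustrated Ising problem: antiferromagnetic bonds on three triangles.
# J[i, j] = +1 means spins i and j "want" to point in opposite directions, and no
# assignment can satisfy every bond of a triangle at once — that frustration is
# what makes the energy landscape interesting.
n = 6
J = np.zeros((n, n))
bonds = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5), (0, 5), (0, 4)]
for i, j in bonds:
    J[i, j] = J[j, i] = 1.0

def energy(s):
    return 0.5 * s @ J @ s                  # E = sum over bonds of J_ij * s_i * s_j

# Classical simulated annealing: start hot (accept almost any flip), cool down
# slowly so the spins settle into a low-energy configuration.
s = rng.choice([-1, 1], size=n)
for T in np.geomspace(5.0, 0.01, 3000):
    k = rng.integers(n)
    dE = -2 * s[k] * (J[k] @ s)             # energy change from flipping spin k
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[k] *= -1

print("final spins :", s)
print("final energy:", energy(s))
```

A quantum annealer attacks the same kind of problem, but instead of thermal flips it starts the whole network in a quantum superposition and gradually turns the ‘quantumness’ down, as King describes above.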

If you have time and the inclination, I encourage you to read King’s piece.

Quantum supremacy

This ‘supremacy’ refers to an engineering milestone, and an October 23, 2019 news item on ScienceDaily announces that the milestone has been reached,

Researchers in UC [University of California] Santa Barbara/Google scientist John Martinis’ group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits (“qubits”), their Sycamore computer has taken on — and solved — a problem considered intractable for classical computers.

An October 23, 2019 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, delves further into the work,

“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” said Brooks Foxen, a graduate student researcher in the Martinis Group. “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”

The feat is outlined in a paper in the journal Nature.

The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems including architectures of 72 and, with Sycamore, 54 qubits (one didn’t perform) that take advantage of the both awe-inspiring and bizarre properties of quantum mechanics.

“The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device,” said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer’s ability to hold and rapidly manipulate a vast amount of complex, unstructured data.

“We basically wanted to produce an entangled state involving all of our qubits as quickly as we can,” Foxen said, “and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns a bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition. The exercise, which was to verify that the circuit’s output corresponds to the sequence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities — before the system could lose its quantum coherence.”

‘A complex superposition state’

“We performed a fixed set of operations that entangles 53 qubits into a complex superposition state,” Chiaro explained. “This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds.”

“For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits — the exponential scaling is why people are interested in quantum computing to begin with,” Foxen said. “This is done by matrix multiplication, which is expensive for classical computers as the matrices become large.”
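A quick back-of-the-envelope calculation (mine, not the researchers’) shows why brute force runs out of road so quickly: just storing the state vector takes 16 bytes per complex amplitude, and the number of amplitudes doubles with every qubit,

```python
# Memory needed to hold the full state vector of n qubits,
# assuming one complex number (16 bytes) per amplitude.
for n in (20, 30, 40, 53):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} qubits: {amplitudes:.2e} amplitudes, about {gigabytes:,.0f} GB")
```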

According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit’s output (a “bitstring”) to its “corresponding ideal probability computed via simulation on a classical computer” to ascertain that the quantum computer was working correctly.
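The linear version of that cross-entropy score has a compact form: average the ideal probabilities of the bitstrings the hardware actually produced, multiply by 2^n and subtract 1. Below is a small numpy sketch of that formula applied to a made-up 10-qubit distribution — an illustration of the scoring idea only, not Google's benchmarking code,

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, samples):
    """Linear cross-entropy benchmarking score.

    ideal_probs : length-2**n array of bitstring probabilities from a classical
                  simulation of the ideal circuit.
    samples     : bitstrings measured on the hardware, as integer indices.
    Scores near 1 indicate sampling from the ideal "speckled" distribution;
    scores near 0 indicate uniformly random output.
    """
    n = int(np.log2(len(ideal_probs)))
    return 2**n * np.mean(ideal_probs[samples]) - 1

# Tiny demonstration with a made-up 10-qubit output distribution:
rng = np.random.default_rng(seed=5)
dim = 2 ** 10
p = rng.exponential(size=dim)              # rough stand-in for a speckled ideal distribution
p /= p.sum()

good = rng.choice(dim, size=20000, p=p)    # samples drawn from the ideal distribution
noisy = rng.integers(dim, size=20000)      # samples from a completely depolarised device
print("XEB, ideal sampler :", round(linear_xeb_fidelity(p, good), 2))    # close to 1
print("XEB, random sampler:", round(linear_xeb_fidelity(p, noisy), 2))   # close to 0
```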

“We made a lot of design choices in the development of our processor that are really advantageous,” said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.

While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can’t be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and “cheat” the system.

“Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable,” commented Joe Incandela, UC Santa Barbara’s vice chancellor for research. “The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition.”

Looking ahead

With an achievement like “quantum supremacy,” it’s tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.

“It’s kind of a continuous improvement mindset,” Foxen said. “There are always projects in the works.” In the near term, further improvements to these “noisy” qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.

In the long term, however, the scientists are always looking to improve coherence times, or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running parallel to the design and build of the quantum computer itself, and ensure the researchers have a lot of work before hitting their next milestone.

“It’s been an honor and a pleasure to be associated with this team,” Chiaro said. “It’s a great collection of strong technical contributors with great leadership and the whole team really synergizes well.”

Here’s a link to and a citation for the paper,

Quantum supremacy using a programmable superconducting processor by Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis. Nature volume 574, pages505–510 (2019) DOI: https://doi.org/10.1038/s41586-019-1666-5 Issue Date 24 October 2019

This paper appears to be open access.

Seeing the future with quantum computing

Researchers at the University of Sydney (Australia) have demonstrated the ability to see the ‘quantum future’ according to a Jan. 16, 2017 news item on ScienceDaily,

Scientists at the University of Sydney have demonstrated the ability to “see” the future of quantum systems, and used that knowledge to preempt their demise, in a major achievement that could help bring the strange and powerful world of quantum technology closer to reality.

The applications of quantum-enabled technologies are compelling and already demonstrating significant impacts — especially in the realm of sensing and metrology. And the potential to build exceptionally powerful quantum computers using quantum bits, or qubits, is driving investment from the world’s largest companies.

However a significant obstacle to building reliable quantum technologies has been the randomisation of quantum systems by their environments, or decoherence, which effectively destroys the useful quantum character.

The physicists have taken a technical quantum leap in addressing this, using techniques from big data to predict how quantum systems will change and then preventing the system’s breakdown from occurring.

A Jan. 14, 2017 University of Sydney press release (also on EurekAlert), which originated the news item, expands on the theme,

“Much the way the individual components in mobile phones will eventually fail, so too do quantum systems,” said the paper’s senior author Professor Michael J. Biercuk.

“But in quantum technology the lifetime is generally measured in fractions of a second, rather than years.”

Professor Biercuk, from the University of Sydney’s School of Physics and a chief investigator at the Australian Research Council’s Centre of Excellence for Engineered Quantum Systems, said his group had demonstrated it was possible to suppress decoherence in a preventive manner. The key was to develop a technique to predict how the system would disintegrate.

Professor Biercuk highlighted the challenges of making predictions in a quantum world: “Humans routinely employ predictive techniques in our daily experience; for instance, when we play tennis we predict where the ball will end up based on observations of the airborne ball,” he said.

“This works because the rules that govern how the ball will move, like gravity, are regular and known.  But what if the rules changed randomly while the ball was on its way to you?  In that case it’s next to impossible to predict the future behavior of that ball.

“And yet this situation is exactly what we had to deal with because the disintegration of quantum systems is random. Moreover, in the quantum realm observation erases quantumness, so our team needed to be able to guess how and when the system would randomly break.

“We effectively needed to swing at the randomly moving tennis ball while blindfolded.”

The team turned to machine learning for help in keeping their quantum systems – qubits realised in trapped atoms – from breaking.

What might look like random behavior actually contained enough information for a computer program to guess how the system would change in the future. It could then predict the future without direct observation, which would otherwise erase the system’s useful characteristics.

The predictions were remarkably accurate, allowing the team to use their guesses preemptively to compensate for the anticipated changes.

Doing this in real time allowed the team to prevent the disintegration of the quantum character, extending the useful lifetime of the qubits.
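The ‘predict, then compensate’ idea can be sketched with very ordinary tools. The toy below (numpy only, with a made-up drift model of my own — it is not the Sydney team's machine-learning method) fits a simple linear predictor to noisy observations of a wandering qubit detuning and shows that pre-correcting with the prediction leaves far less residual error than doing nothing,

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# A slowly wandering qubit "detuning" (a smooth random drift) that can only be
# observed through noisy measurements.
steps = 2000
drift = np.cumsum(rng.normal(scale=0.02, size=steps))
observed = drift + rng.normal(scale=0.05, size=steps)

# Fit a simple linear predictor: estimate the next value from the previous k
# noisy observations.  (A real implementation would fit on past data only;
# this toy fits over the whole record for brevity.)
k = 8
rows = np.array([observed[i - k:i] for i in range(k, steps)])
targets = observed[k:]
coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)

# Pre-compensate each step using the prediction, then compare the leftover
# error against applying no correction at all.
predicted = rows @ coeffs
print(f"mean squared drift, no correction : {np.mean(drift[k:] ** 2):.4f}")
print(f"mean squared drift, pre-corrected : {np.mean((drift[k:] - predicted) ** 2):.4f}")
```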

“We know that building real quantum technologies will require major advances in our ability to control and stabilise qubits – to make them useful in applications,” Professor Biercuk said.

“Our techniques apply to any qubit, built in any technology, including the special superconducting circuits being used by major corporations.”

“We’re excited to be developing new capabilities that turn quantum systems from novelties into useful technologies. The quantum future is looking better all the time,” Professor Biercuk said.

Here’s a link to and a citation for the paper,

Prediction and real-time compensation of qubit decoherence via machine learning by Sandeep Mavadia, Virginia Frey, Jarrah Sastrawan, Stephen Dona, & Michael J. Biercuk. Nature Communications 8, Article number: 14106 (2017) doi:10.1038/ncomms14106 Published online: 16 January 2017

This paper is open access.

Connecting chaos and entanglement

Researchers seem to have stumbled across a link between classical and quantum physics. A July 12, 2016 University of California at Santa Barbara (UCSB) news release (also on EurekAlert) by Sonia Fernandez provides a description of both classical and quantum physics, as well as, the research that connects the two,

Using a small quantum system consisting of three superconducting qubits, researchers at UC Santa Barbara and Google have uncovered a link between aspects of classical and quantum physics thought to be unrelated: classical chaos and quantum entanglement. Their findings suggest that it would be possible to use controllable quantum systems to investigate certain fundamental aspects of nature.

“It’s kind of surprising because chaos is this totally classical concept — there’s no idea of chaos in a quantum system,” said Charles Neill, a researcher in the UCSB Department of Physics and lead author of a paper that appears in Nature Physics. “Similarly, there’s no concept of entanglement within classical systems. And yet it turns out that chaos and entanglement are really very strongly and clearly related.”

Initiated in the 15th century, classical physics generally examines and describes systems larger than atoms and molecules. It consists of hundreds of years’ worth of study including Newton’s laws of motion, electrodynamics, relativity, thermodynamics as well as chaos theory — the field that studies the behavior of highly sensitive and unpredictable systems. One classic example of chaos theory is the weather, in which a relatively small change in one part of the system is enough to foil predictions — and vacation plans — anywhere on the globe.

At smaller size and length scales in nature, however, such as those involving atoms and photons and their behaviors, classical physics falls short. In the early 20th century quantum physics emerged, with its seemingly counterintuitive and sometimes controversial science, including the notions of superposition (the theory that a particle can be located in several places at once) and entanglement (particles that are deeply linked behave as such despite physical distance from one another).

And so began the continuing search for connections between the two fields.

All systems are fundamentally quantum systems, according [to] Neill, but the means of describing in a quantum sense the chaotic behavior of, say, air molecules in an evacuated room, remains limited.

Imagine taking a balloon full of air molecules, somehow tagging them so you could see them and then releasing them into a room with no air molecules, noted co-author and UCSB/Google researcher Pedram Roushan. One possible outcome is that the air molecules remain clumped together in a little cloud following the same trajectory around the room. And yet, he continued, as we can probably intuit, the molecules will more likely take off in a variety of velocities and directions, bouncing off walls and interacting with each other, resting after the room is sufficiently saturated with them.

“The underlying physics is chaos, essentially,” he said. The molecules coming to rest — at least on the macroscopic level — is the result of thermalization, or of reaching equilibrium after they have achieved uniform saturation within the system. But in the infinitesimal world of quantum physics, there is still little to describe that behavior. The mathematics of quantum mechanics, Roushan said, do not allow for the chaos described by Newtonian laws of motion.

To investigate, the researchers devised an experiment using three quantum bits, the basic computational units of the quantum computer. Unlike classical computer bits, which utilize a binary system of two possible states (e.g., zero/one), a qubit can also use a superposition of both states (zero and one) as a single state. Additionally, multiple qubits can entangle, or link so closely that their measurements will automatically correlate. By manipulating these qubits with electronic pulses, Neill caused them to interact, rotate and evolve in the quantum analog of a highly sensitive classical system.

The result is a map of entanglement entropy of a qubit that, over time, comes to strongly resemble that of classical dynamics — the regions of entanglement in the quantum map resemble the regions of chaos on the classical map. The islands of low entanglement in the quantum map are located in the places of low chaos on the classical map.
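The ‘entanglement entropy’ being mapped here has a compact definition: take one qubit's reduced density matrix and compute its von Neumann entropy. Here's a short numpy sketch (mine, not the study's analysis code) that does exactly that for a three-qubit state,

```python
import numpy as np

def one_qubit_entanglement_entropy(state, qubit, n):
    """Von Neumann entropy (in bits) of a single qubit's reduced density matrix,
    which quantifies how entangled that qubit is with the other n-1 qubits."""
    t = np.moveaxis(state.reshape([2] * n), qubit, 0).reshape(2, -1)
    rho = t @ t.conj().T                        # 2x2 reduced density matrix
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(max(0.0, -np.sum(evals * np.log2(evals))))

# The three-qubit GHZ state (|000> + |111>)/sqrt(2): every qubit carries
# a full bit of entanglement with the rest.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
print([round(one_qubit_entanglement_entropy(ghz, q, 3), 3) for q in range(3)])       # [1.0, 1.0, 1.0]

# A product state like |000> carries no entanglement at all.
product = np.zeros(8, dtype=complex)
product[0] = 1.0
print([round(one_qubit_entanglement_entropy(product, q, 3), 3) for q in range(3)])   # [0.0, 0.0, 0.0]
```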

“There’s a very clear connection between entanglement and chaos in these two pictures,” said Neill. “And, it turns out that thermalization is the thing that connects chaos and entanglement. It turns out that they are actually the driving forces behind thermalization.

“What we realize is that in almost any quantum system, including on quantum computers, if you just let it evolve and you start to study what happens as a function of time, it’s going to thermalize,” added Neill, referring to the quantum-level equilibration. “And this really ties together the intuition between classical thermalization and chaos and how it occurs in quantum systems that entangle.”

The study’s findings have fundamental implications for quantum computing. At the level of three qubits, the computation is relatively simple, said Roushan, but as researchers push to build increasingly sophisticated and powerful quantum computers that incorporate more qubits to study highly complex problems that are beyond the ability of classical computing — such as those in the realms of machine learning, artificial intelligence, fluid dynamics or chemistry — a quantum processor optimized for such calculations will be a very powerful tool.

“It means we can study things that are completely impossible to study right now, once we get to bigger systems,” said Neill.

Experimental link between quantum entanglement (left) and classical chaos (right) found using a small quantum computer. Photo Credit: Courtesy Image (Courtesy: UCSB)

Here’s a link to and a citation for the paper,

Ergodic dynamics and thermalization in an isolated quantum system by C. Neill, P. Roushan, M. Fang, Y. Chen, M. Kolodrubetz, Z. Chen, A. Megrant, R. Barends, B. Campbell, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, J. Mutus, P. J. J. O’Malley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. Polkovnikov, & J. M. Martinis. Nature Physics (2016)  doi:10.1038/nphys3830 Published online 11 July 2016

This paper is behind a paywall.

D-Wave passes 1000-qubit barrier

A local (Vancouver, Canada-based) quantum computing company, D-Wave, is making quite a splash lately due to a technical breakthrough. h/t’s to Speaking up for Canadian Science for the Business in Vancouver article and to Nanotechnology Now for the Harris & Harris Group press release and Economist article.

A June 22, 2015 article by Tyler Orton for Business in Vancouver describes D-Wave’s latest technical breakthrough,

“This updated processor will allow significantly more complex computational problems to be solved than ever before,” Jeremy Hilton, D-Wave’s vice-president of processor development, wrote in a June 22 [2015] blog entry.

Regular computers use two bits – ones and zeroes – to make calculations, while quantum computers rely on qubits.

Qubits possess a “superposition” that allows them to be one and zero at the same time, meaning they can calculate all possible values in a single operation.

But the algorithm for a full-scale quantum computer requires 8,000 qubits.

A June 23, 2015 Harris & Harris Group press release adds more information about the breakthrough,

Harris & Harris Group, Inc. (Nasdaq: TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that it has successfully fabricated 1,000 qubit processors that power its quantum computers.  D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near optimal solutions, in a virtual “energy landscape.”  Every additional qubit doubles the search space of the processor.  At 1,000 qubits, the new processor considers 21000 possibilities simultaneously, a search space which is substantially larger than the 2512 possibilities available to the company’s currently available 512 qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.

A June 22, 2015 D-Wave news release, which originated the technical details about the breakthrough found in the Harris & Harris press release, provides more information along with some marketing hype (hyperbole), Note: Links have been removed,

As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.

“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”

The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:

  • Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
  • Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
  • Increased Control Circuitry Precision: In the testing to date, the increased precision coupled with the noise reduction has demonstrated improved precision by up to 40%. To accomplish both while also improving manufacturing yield is a significant achievement.
  • Advanced Fabrication:  The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal layer planar process with 0.25μm features, believed to be the most complex superconductor integrated circuits ever built.
  • New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources.  In addition to performing discrete optimization like its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.

“Breaking the 1000 qubit barrier marks the culmination of years of research and development by our scientists, engineers and manufacturing team,” said D-Wave CEO Vern Brownell. “It is a critical step toward bringing the promise of quantum computing to bear on some of the most challenging technical, commercial, scientific, and national defense problems that organizations face.”

A June 20, 2015 article in The Economist notes the growing commercial interest in quantum computing and provides a good introduction to the field. The article includes an analysis of various research efforts in Canada (they mention D-Wave), the US, and the UK. These excerpts don’t do justice to the article but will hopefully whet your appetite or provide an overview for anyone with limited time,

A COMPUTER proceeds one step at a time. At any particular moment, each of its bits—the binary digits it adds and subtracts to arrive at its conclusions—has a single, definite value: zero or one. At that moment the machine is in just one state, a particular mixture of zeros and ones. It can therefore perform only one calculation next. This puts a limit on its power. To increase that power, you have to make it work faster.

But bits do not exist in the abstract. Each depends for its reality on the physical state of part of the computer’s processor or memory. And physical states, at the quantum level, are not as clear-cut as classical physics pretends. That leaves engineers a bit of wriggle room. By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits.

… The biggest question is what the qubits themselves should be made from.

A qubit needs a physical system with two opposite quantum states, such as the direction of spin of an electron orbiting an atomic nucleus. Several things which can do the job exist, and each has its fans. Some suggest nitrogen atoms trapped in the crystal lattices of diamonds. Calcium ions held in the grip of magnetic fields are another favourite. So are the photons of which light is composed (in this case the qubit would be stored in the plane of polarisation). And quasiparticles, which are vibrations in matter that behave like real subatomic particles, also have a following.

The leading candidate at the moment, though, is to use a superconductor in which the qubit is either the direction of a circulating current, or the presence or absence of an electric charge. Both Google and IBM are banking on this approach. It has the advantage that superconducting qubits can be arranged on semiconductor chips of the sort used in existing computers. That, the two firms think, should make them easier to commercialise.

Google is also collaborating with D-Wave of Vancouver, Canada, which sells what it calls quantum annealers. The field’s practitioners took much convincing that these devices really do exploit the quantum advantage, and in any case they are limited to a narrower set of problems—such as searching for images similar to a reference image. But such searches are just the type of application of interest to Google. In 2013, in collaboration with NASA and USRA, a research consortium, the firm bought a D-Wave machine in order to put it through its paces. Hartmut Neven, director of engineering at Google Research, is guarded about what his team has found, but he believes D-Wave’s approach is best suited to calculations involving fewer qubits, while Dr Martinis and his colleagues build devices with more.

It’s not clear to me whether the writers at The Economist were aware of D-Wave’s latest breakthrough at the time of writing, but I think not. In any event, they (The Economist writers) have included a provocative tidbit about quantum encryption,

Documents released by Edward Snowden, a whistleblower, revealed that the Penetrating Hard Targets programme of America’s National Security Agency was actively researching “if, and how, a cryptologically useful quantum computer can be built”. In May IARPA [Intelligence Advanced Research Projects Activity], the American government’s intelligence-research arm, issued a call for partners in its Logical Qubits programme, to make robust, error-free qubits. In April, meanwhile, Tanja Lange and Daniel Bernstein of Eindhoven University of Technology, in the Netherlands, announced PQCRYPTO, a programme to advance and standardise “post-quantum cryptography”. They are concerned that encrypted communications captured now could be subjected to quantum cracking in the future. That means strong pre-emptive encryption is needed immediately.
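
The ‘capture now, crack later’ worry makes more sense once you see how thin the margin is: widely used public-key schemes such as RSA stand or fall on the difficulty of factoring, and Shor’s algorithm on a large quantum computer would make that factoring easy. Here is a textbook-style toy RSA of my own (tiny primes, no padding, illustration only) showing that recovering the private key reduces to factoring the public modulus.

```python
# Textbook RSA with toy primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                      # public modulus; keeping p and q secret is the whole game
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

message = 42
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))   # 42 -- decryption with the private key

# An attacker who can factor n recovers d and reads everything:
for candidate in range(2, n):
    if n % candidate == 0:
        p_found, q_found = candidate, n // candidate
        break
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
print(pow(ciphertext, d_cracked, n))   # 42 again -- security rested entirely on factoring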

I encourage you to read the Economist article.

Two final comments. (1) The latest piece about D-Wave prior to this one was a Feb. 6, 2015 posting about new investment in the company. (2) A Canadian effort in the field of quantum cryptography was mentioned in a May 11, 2015 posting (scroll down about 50% of the way) featuring a profile of Raymond Laflamme of the University of Waterloo’s Institute for Quantum Computing, in the context of an announcement about the science media initiative Research2Reality.

Solving an iridescent mystery could lead to quantum transistors

Iridescence has fascinated me (and scores of other people) since early childhood, and it seems scientists are almost as enchanted as we amateurs are. The latest bit of ‘iridescent’ news comes from the University of Michigan in a Dec. 5, 2014 news item on ScienceDaily,

An odd, iridescent material that’s puzzled physicists for decades turns out to be an exotic state of matter that could open a new path to quantum computers and other next-generation electronics.

Physicists at the University of Michigan have discovered or confirmed several properties of the compound samarium hexaboride that raise hopes for finding the silicon of the quantum era. They say their results also close the case of how to classify the material–a mystery that has been investigated since the late 1960s.

A Dec. 5, 2014 University of Michigan news release, which originated the news item, provides more details about the mystery and the efforts to resolve it,

The researchers provide the first direct evidence that samarium hexaboride, abbreviated SmB6, is a topological insulator. Topological insulators are, to physicists, an exciting class of solids that conduct electricity like a metal across their surface, but block the flow of current like rubber through their interior. They behave in this two-faced way even though their chemical composition is the same throughout.

The U-M scientists used a technique called torque magnetometry to observe tell-tale oscillations in the material’s response to a magnetic field that reveal how electric current moves through it. Their technique also showed that the surface of samarium hexaboride holds rare Dirac electrons, particles with the potential to help researchers overcome one of the biggest hurdles in quantum computing.

These properties are particularly enticing to scientists because SmB6 is considered a strongly correlated material. Its electrons interact more closely with one another than most solids. This helps its interior maintain electricity-blocking behavior.

This deeper understanding of samarium hexaboride raises the possibility that engineers might one day route the flow of electric current in quantum computers like they do on silicon in conventional electronics, said Lu Li, assistant professor of physics in the College of Literature, Science, and the Arts and a co-author of a paper on the findings published in Science.

“Before this, no one had found Dirac electrons in a strongly correlated material,” Li said. “We thought strong correlation would hurt them, but now we know it doesn’t. While I don’t think this material is the answer, now we know that this combination of properties is possible and we can look for other candidates.”

The drawback of samarium hexaboride is that the researchers only observed these behaviors at ultracold temperatures.

Quantum computers use particles like atoms or electrons to perform processing and memory tasks. They could offer dramatic increases in computing power due to their ability to carry out scores of calculations at once. Because they could factor numbers much faster than conventional computers, they would greatly improve computer security.

In quantum computers, “qubits” stand in for the 0s and 1s of conventional computers’ binary code. While a conventional bit can be either a 0 or a 1, a qubit could be both at the same time—only until you measure it, that is. Measuring a quantum system forces it to pick one state, which eliminates its main advantage.
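
That ‘forced to pick one state’ behaviour is easy to simulate for a single qubit. The sketch below is my own illustration (not the U-M group’s methodology): the state vector holds amplitudes for both 0 and 1, but each measurement returns only one outcome, with probability given by the squared amplitude.

```python
import numpy as np

rng = np.random.default_rng()

# A qubit in an unequal superposition: amplitudes for |0> and |1>.
state = np.array([np.sqrt(0.2), np.sqrt(0.8)])

def measure(state):
    """Sample one measurement outcome; probabilities are |amplitude|^2."""
    probs = np.abs(state) ** 2
    return rng.choice([0, 1], p=probs)

outcomes = [measure(state) for _ in range(10_000)]
print("fraction of 1s:", sum(outcomes) / len(outcomes))   # roughly 0.8
```

Run it and roughly 80% of the outcomes come back as 1, but no single shot ever shows ‘both at once’.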

Dirac electrons, named after the English physicist whose equations describe their behavior, straddle the realms of classical and quantum physics, Li said. Working together with other materials, they could be capable of clumping together into a new kind of qubit that would change the properties of a material in a way that could be measured indirectly, without the qubit sensing it. The qubit could remain in both states.

While these applications are intriguing, the researchers are most enthusiastic about the fundamental science they’ve uncovered.

“In the science business you have concepts that tell you it should be this or that and when it’s two things at once, that’s a sign you have something interesting to find,” said Jim Allen, an emeritus professor of physics who studied samarium hexaboride for 30 years. “Mysteries are always intriguing to people who do curiosity-driven research.”

Allen thought for years that samarium hexaboride must be a flawed insulator that behaved like a metal at low temperatures because of defects and impurities, but he couldn’t align that with all of its other properties.

“The prediction several years ago about it being a topological insulator makes a lightbulb go off if you’re an old guy like me and you’ve been living with this stuff your whole life,” Allen said.

In 2010, Kai Sun, assistant professor of physics at U-M, led a group that first posited that SmB6 might be a topological insulator. He and Allen were also involved in seminal U-M experiments led by physics professor Cagliyan Kurdak in 2012 that showed indirectly that the hypothesis was correct.

“But the scientific community is always critical,” Sun said. “They want very strong evidence. We think this experiment finally provides direct proof of our theory.”

Here’s a link to and a citation for the researchers’ latest paper,

Two-dimensional Fermi surfaces in Kondo insulator SmB6 by G. Li, Z. Xiang, F. Yu, T. Asaba, B. Lawson, P. Cai, C. Tinsman, A. Berkley, S. Wolgast, Y. S. Eo, Dae-Jeong Kim, C. Kurdak, J. W. Allen, K. Sun, X. H. Chen, Y. Y. Wang, Z. Fisk, and Lu Li. Science 5 December 2014: Vol. 346, no. 6214, pp. 1208-1212. DOI: 10.1126/science.1250366

This paper is behind a paywall.