Tag Archives: qubits

Microsoft, D-Wave Systems, quantum computing, and quantum supremacy?

Before diving into some of the latest quantum computing doings, here’s why quantum computing is so highly prized and chased after, from the Quantum supremacy Wikipedia entry, Note: Links have been removed,

In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem.[1][2][3] The term was coined by John Preskill in 2011,[1][4] but the concept dates to Yuri Manin’s 1980[5] and Richard Feynman’s 1981[6] proposals of quantum computing.

Quantum supremacy and quantum advantage have been mentioned a few times here over the years. You can check my March 6, 2020 posting for when researchers from the University of California at Santa Barbara claimed quantum supremacy and my July 31, 2023 posting for when D-Wave Systems claimed a quantum advantage on optimization problems. I’d understood quantum supremacy and quantum advantage to be synonymous but, according to the article in Betakit (keep scrolling down to the D-Wave subhead and then to the ‘A controversy of sorts’ subhead in this posting), that’s not so.

The latest news on the quantum front comes from Microsoft (February 2025) and D-Wave Systems (March 2025).

Microsoft claims a new state of matter for breakthroughs in quantum computing

Here’s the February 19, 2025 news announcement from Microsoft’s Chetan Nayak, Technical Fellow and Corporate Vice President of Quantum Hardware, Note: Links have been removed,

Quantum computers promise to transform science and society—but only after they achieve the scale that once seemed distant and elusive, and their reliability is ensured by quantum error correction. Today, we’re announcing rapid advancements on the path to useful quantum computing:

  • Majorana 1: the world’s first Quantum Processing Unit (QPU) powered by a Topological Core, designed to scale to a million qubits on a single chip.
  • A hardware-protected topological qubit: research published today in Nature, along with data shared at the Station Q meeting, demonstrate our ability to harness a new type of material and engineer a radically different type of qubit that is small, fast, and digitally controlled.
  • A device roadmap to reliable quantum computation: our path from single-qubit devices to arrays that enable quantum error correction.
  • Building the world’s first fault-tolerant prototype (FTP) based on topological qubits: Microsoft is on track to build an FTP of a scalable quantum computer—in years, not decades—as part of the final phase of the Defense Advanced Research Projects Agency (DARPA) Underexplored Systems for Utility-Scale Quantum Computing (US2QC) program.

Together, these milestones mark a pivotal moment in quantum computing as we advance from scientific exploration to technological innovation.

Harnessing a new type of material

All of today’s announcements build on our team’s recent breakthrough: the world’s first topoconductor. This revolutionary class of materials enables us to create topological superconductivity, a new state of matter that previously existed only in theory. The advance stems from Microsoft’s innovations in the design and fabrication of gate-defined devices that combine indium arsenide (a semiconductor) and aluminum (a superconductor). When cooled to near absolute zero and tuned with magnetic fields, these devices form topological superconducting nanowires with Majorana Zero Modes (MZMs) at the wires’ ends.

Chris Vallance’s February 19, 2025 article for the British Broadcasting Corporation (BBC) news online website provides a description of Microsoft’s claims and makes note of the competitive quantum research environment,

Microsoft has unveiled a new chip called Majorana 1 that it says will enable the creation of quantum computers able to solve “meaningful, industrial-scale problems in years, not decades”.

It is the latest development in quantum computing – tech which uses principles of particle physics to create a new type of computer able to solve problems ordinary computers cannot.

Creating quantum computers powerful enough to solve important real-world problems is very challenging – and some experts believe them to be decades away.

Microsoft says this timetable can now be sped up because of the “transformative” progress it has made in developing the new chip involving a “topological conductor”, based on a new material it has produced.

The firm believes its topoconductor has the potential to be as revolutionary as the semiconductor was in the history of computing.

But experts have told the BBC more data is needed before the significance of the new research – and its effect on quantum computing – can be fully assessed.

Jensen Huang – boss of the leading chip firm, Nvidia – said in January he believed “very useful” quantum computing would come in 20 years.

Chetan Nayak, a technical fellow of quantum hardware at Microsoft, said he believed the developments would shake up conventional thinking about the future of quantum computers.

“Many people have said that quantum computing, that is to say useful quantum computers, are decades away,” he said. “I think that this brings us into years rather than decades.”

Travis Humble, director of the Quantum Science Center of Oak Ridge National Laboratory in the US, said he agreed Microsoft would now be able to deliver prototypes faster – but warned there remained work to do.

“The long term goals for solving industrial applications on quantum computers will require scaling up these prototypes even further,” he said.

While rivals produced a steady stream of announcements – notably Google’s “Willow” at the end of 2024 – Microsoft seemed to be taking longer.

Pursuing this approach was, in the company’s own words, a “high-risk, high-rewards” strategy, but one it now believes is going to pay off.

If you have the time, do read Vallance’s February 19, 2025 article.

The research paper

Purdue University’s (Indiana, US) February 25, 2025 news release on EurekAlert announces publication of the research, Note: Links have been removed,

Microsoft Quantum published an article in Nature on Feb. 19 [2025] detailing recent advances in the measurement of quantum devices that will be needed to realize a topological quantum computer. Among the authors are Microsoft scientists and engineers who conduct research at Microsoft Quantum Lab West Lafayette, located at Purdue University. In an announcement by Microsoft Quantum, the team describes the operation of a device that is a necessary building block for a topological quantum computer. The published results are an important milestone along the path to construction of quantum computers that are potentially more robust and powerful than existing technologies.

“Our hope for quantum computation is that it will aid chemists, materials scientists and engineers working on the design and manufacturing of new materials that are so important to our daily lives,” said Michael Manfra, scientific director of Microsoft Quantum Lab West Lafayette and the Bill and Dee O’Brien Distinguished Professor of Physics and Astronomy, professor of materials engineering, and professor of electrical and computer engineering at Purdue. “The promise of quantum computation is in accelerating scientific discovery and its translation into useful technology. For example, if quantum computers reduce the time and cost to produce new lifesaving therapeutic drugs, that is real societal impact.” 

The Microsoft Quantum Lab West Lafayette team advanced the complex layered materials that make up the quantum plane of the full device architecture used in the tests. Microsoft scientists working with Manfra are experts in advanced semiconductor growth techniques, including molecular beam epitaxy, that are used to build low-dimensional electron systems that form the basis for quantum bits, or qubits. They built the semiconductor and superconductor layers with atomic layer precision, tailoring the material’s properties to those needed for the device architecture.

Manfra, a member of the Purdue Quantum Science and Engineering Institute, credited the strong relationship between Purdue and Microsoft, built over the course of a decade, with the advances conducted at Microsoft Quantum Lab West Lafayette. In 2017 Purdue deepened its relationship with Microsoft with a multiyear agreement that includes embedding Microsoft employees with Manfra’s research team at Purdue.

“This was a collaborative effort by a very sophisticated team, with a vital contribution from the Microsoft scientists at Purdue,” Manfra said. “It’s a Microsoft team achievement, but it’s also the culmination of a long-standing partnership between Purdue and Microsoft. It wouldn’t have been possible without an environment at Purdue that was conducive to this mode of work — I attempted to blend industrial with academic research to the betterment of both communities. I think that’s a success story.”

Quantum science and engineering at Purdue is a pillar of the Purdue Computes initiative, which is focused on advancing research in computing, physical AI, semiconductors and quantum technologies.

“This research breakthrough in the measurement of the state of quasi particles is a milestone in the development of topological quantum computing, and creates a watershed moment in the semiconductor-superconductor hybrid structure,” Purdue President Mung Chiang said. “Marking also the latest success in the strategic initiative of Purdue Computes, the deep collaboration that Professor Manfra and his team have created with the Microsoft Quantum Lab West Lafayette on the Purdue campus exemplifies the most impactful industry research partnership at any American university today.”

Most approaches to quantum computers rely on local degrees of freedom to encode information. The spin of an electron is a classic example of a qubit. But an individual spin is prone to disturbance — by relatively common things like heat, vibrations or interactions with other quantum particles — which can corrupt quantum information stored in the qubit, necessitating a great deal of effort in detecting and correcting errors. Instead of spin, topological quantum computers store information in a more distributed manner; the qubit state is encoded in the state of many particles acting in concert. Consequently, it is harder to scramble the information as the state of all the particles must be changed to alter the qubit state.
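(Stepping outside the quoted news release for a moment: the “distributed encoding” idea has a loose classical analogue in the three-bit repetition code, where one logical bit is spread across three physical bits so that a single flip can be outvoted. Topological protection is far subtler than this, but the toy below, which is my own hypothetical sketch and not anything from the Nature paper, conveys the basic intuition that corrupting distributed information requires corrupting several places at once.)

```python
import random

def encode(bit):
    # Spread one logical bit across three physical bits.
    return [bit, bit, bit]

def corrupt(codeword, p=0.2):
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    # Majority vote recovers the logical bit if at most one flip occurred.
    return int(sum(codeword) >= 2)

random.seed(0)
trials = 10_000
# An unprotected bit is wrong whenever its single flip occurs: rate ~ p = 0.2.
bare = sum(random.random() < 0.2 for _ in range(trials)) / trials
# The encoded bit is wrong only when 2+ of 3 bits flip: rate ~ 3p^2 - 2p^3 = 0.104.
coded = sum(decode(corrupt(encode(0), 0.2)) != 0 for _ in range(trials)) / trials
print(bare, coded)
```

The point of the analogy is only that distributing information lowers the error rate; a topological qubit achieves something like this physically, in the hardware itself, rather than through redundant copies.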

In the Nature paper, the Microsoft team was able to accurately and quickly measure the state of quasi particles that form the basis of the qubit.

“The device is used to measure a basic property of a topological qubit quickly,” Manfra said. “The team is excited to build on these positive results.”

“The team in West Lafayette pushed existing epitaxial technology to a new state-of-the-art for semiconductor-superconductor hybrid structures to ensure a perfect interface between each of the building blocks of the Microsoft hybrid system,” said Sergei Gronin, a Microsoft Quantum Lab scientist.

“The materials quality that is required for quantum computing chips necessitates constant improvements, so that’s one of the biggest challenges,” Gronin said. “First, we had to adjust and improve semiconductor technology to meet a new level that nobody was able to achieve before. But equally important was how to create this hybrid system. To do that, we had to merge a semiconducting part and a superconducting part. And that means you need to perfect the semiconductor and the superconductor and perfect the interface between them.”

While work discussed in the Nature article was performed by Microsoft employees, the exposure to industrial-scale research and development is an outstanding opportunity for Purdue students in Manfra’s academic group as well. John Watson, Geoffrey Gardner and Saeed Fallahi, who are among the coauthors of the paper, earned their doctoral degrees under Manfra and now work for Microsoft Quantum at locations in Redmond, Washington, and Copenhagen, Denmark. Most of Manfra’s former students now work for quantum computing companies, including Microsoft. Tyler Lindemann, who works in the West Lafayette lab and helped to build the hybrid semiconductor-superconductor structures required for the device, is earning a doctoral degree from Purdue under Manfra’s supervision.

“Working in Professor Manfra’s lab in conjunction with my work for Microsoft Quantum has given me a head start in my professional development, and been fruitful for my academic work,” Lindemann said. “At the same time, many of the world-class scientists and engineers at Microsoft Quantum have some background in academia, and being able to draw from their knowledge and experience is an indispensable resource in my graduate studies. From both perspectives, it’s a great opportunity.”

Here’s a link to and a citation for the paper,

Interferometric single-shot parity measurement in InAs–Al hybrid devices by Microsoft Azure Quantum, Morteza Aghaee, Alejandro Alcaraz Ramirez, Zulfi Alam, Rizwan Ali, Mariusz Andrzejczuk, Andrey Antipov, Mikhail Astafev, Amin Barzegar, Bela Bauer, Jonathan Becker, Umesh Kumar Bhaskar, Alex Bocharov, Srini Boddapati, David Bohn, Jouri Bommer, Leo Bourdet, Arnaud Bousquet, Samuel Boutin, Lucas Casparis, Benjamin J. Chapman, Sohail Chatoor, Anna Wulff Christensen, Cassandra Chua, Patrick Codd, William Cole, Paul Cooper, Fabiano Corsetti, Ajuan Cui, Paolo Dalpasso, Juan Pablo Dehollain, Gijs de Lange, Michiel de Moor, Andreas Ekefjärd, Tareq El Dandachi, Juan Carlos Estrada Saldaña, Saeed Fallahi, Luca Galletti, Geoff Gardner, Deshan Govender, Flavio Griggio, Ruben Grigoryan, Sebastian Grijalva, Sergei Gronin, Jan Gukelberger, Marzie Hamdast, Firas Hamze, Esben Bork Hansen, Sebastian Heedt, Zahra Heidarnia, Jesús Herranz Zamorano, Samantha Ho, Laurens Holgaard, John Hornibrook, Jinnapat Indrapiromkul, Henrik Ingerslev, Lovro Ivancevic, Thomas Jensen, Jaspreet Jhoja, Jeffrey Jones, Konstantin V. Kalashnikov, Ray Kallaher, Rachpon Kalra, Farhad Karimi, Torsten Karzig, Evelyn King, Maren Elisabeth Kloster, Christina Knapp, Dariusz Kocon, Jonne V. Koski, Pasi Kostamo, Mahesh Kumar, Tom Laeven, Thorvald Larsen, Jason Lee, Kyunghoon Lee, Grant Leum, Kongyi Li, Tyler Lindemann, Matthew Looij, Julie Love, Marijn Lucas, Roman Lutchyn, Morten Hannibal Madsen, Nash Madulid, Albert Malmros, Michael Manfra, Devashish Mantri, Signe Brynold Markussen, Esteban Martinez, Marco Mattila, Robert McNeil, Antonio B. Mei, Ryan V. Mishmash, Gopakumar Mohandas, Christian Mollgaard, Trevor Morgan, George Moussa, Chetan Nayak, Jens Hedegaard Nielsen, Jens Munk Nielsen, William Hvidtfelt Padkar Nielsen, Bas Nijholt, Mike Nystrom, Eoin O’Farrell, Thomas Ohki, Keita Otani, Brian Paquelet Wütz, Sebastian Pauka, Karl Petersson, Luca Petit, Dima Pikulin, Guen Prawiroatmodjo, Frank Preiss, Eduardo Puchol Morejon, Mohana Rajpalke, Craig Ranta, Katrine Rasmussen, David Razmadze, Outi Reentila, David J. Reilly, Yuan Ren, Ken Reneris, Richard Rouse, Ivan Sadovskyy, Lauri Sainiemi, Irene Sanlorenzo, Emma Schmidgall, Cristina Sfiligoj, Mustafeez Bashir Shah, Kevin Simoes, Shilpi Singh, Sarat Sinha, Thomas Soerensen, Patrick Sohr, Tomas Stankevic, Lieuwe Stek, Eric Stuppard, Henri Suominen, Judith Suter, Sam Teicher, Nivetha Thiyagarajah, Raj Tholapi, Mason Thomas, Emily Toomey, Josh Tracy, Michelle Turley, Shivendra Upadhyay, Ivan Urban, Kevin Van Hoogdalem, David J. Van Woerkom, Dmitrii V. Viazmitinov, Dominik Vogel, John Watson, Alex Webster, Joseph Weston, Georg W. Winkler, Di Xu, Chung Kai Yang, Emrah Yucelen, Roland Zeisel, Guoji Zheng & Justin Zilke. Nature 638, 651–655 (2025). DOI: https://doi.org/10.1038/s41586-024-08445-2. Published online: 19 February 2025. Issue date: 20 February 2025.

This paper is open access. Note: I usually tag all of the authors but not this time.

Controversy over this and previous Microsoft quantum computing claims

Elizabeth Hlavinka’s March 17, 2025 article for Salon.com provides an overview, Note: Links have been removed,

The matter making up the world around us has long since been organized into three neat categories: solids, liquids and gases. But last month [February 2025], Microsoft announced that it had allegedly discovered another state of matter originally theorized to exist in 1937.

This new state of matter called the Majorana zero mode is made up of quasiparticles, which act as their own particle and antiparticle. The idea is that the Majorana zero mode could be used to build a quantum computer, which could help scientists answer complex questions that standard computers are not capable of solving, with implications for medicine, cybersecurity and artificial intelligence.

In late February [2025], Sen. Ted Cruz presented Microsoft’s new computer chip at a congressional hearing, saying, “Technologies like this new chip I hold in the palm of my hand, the Majorana 1 quantum chip, are unlocking a new era of computing that will transform industries from health care to energy, solving problems that today’s computers simply cannot.”

However, Microsoft’s announcement, claiming a “breakthrough in quantum computing,” was met with skepticism from some physicists in the field. Proving that this form of quantum computing can work requires first demonstrating the existence of Majorana quasiparticles, measuring what the Majorana particles are doing, and creating something called a topological qubit used to store quantum information.

But some say that not all of the data necessary to prove this has been included in the research paper published in Nature, on which this announcement is based. And due to a fraught history of similar claims from the company being disputed and ultimately rescinded, some are extra wary of the results. [emphasis mine]

It’s not the first time Microsoft has faced backlash from presenting findings in the field. In 2018, the company reported that it had detected the presence of Majorana zero-modes in a research paper, but the paper was retracted by Nature, the journal that published it, after a report from independent experts put the findings under more intense scrutiny.

In the [2018] report, four physicists not involved in the research concluded that it did not appear that Microsoft had intentionally misrepresented the data, but instead seemed to be “caught up in the excitement of the moment [emphasis mine].”

Establishing the existence of these particles is extremely complex in part because disorder in the device can create signals that mimic these quasiparticles when they are not actually there. 

Modern computers in use today are encoded in bits, which can either be in a zero state (no current flowing through them), or a one state (current flowing.) These bits work together to send information and signals that communicate with the computer, powering everything from cell phones to video games.

Companies like Google, IBM and Amazon have invested in designing another form of quantum computer that uses chips built with “qubits,” or quantum bits. Qubits can exist in both zero and one states at the same time due to a phenomenon called superposition. 

However, qubits are subject to external noise from the environment that can affect their performance, said Dr. Paolo Molignini, a researcher in theoretical quantum physics at Stockholm University.

“Because qubits are in a superposition of zero and one, they are very prone to errors and they are very prone to what is called decoherence, which means there could be noise, thermal fluctuations or many things that can collapse the state of the qubits,” Molignini told Salon in a video call. “Then you basically lose all of the information that you were encoding.”

In December [2024], Google said its quantum computer could perform a calculation that a standard computer could complete in 10 septillion years — a period far longer than the age of the universe — in just under five minutes.

However, a general-purpose computer would require billions of qubits, so these approaches are still a far cry from having practical applications, said Dr. Patrick Lee, a physicist at the Massachusetts Institute of Technology [MIT], who co-authored the report leading to the 2018 Nature paper’s retraction.

Microsoft is taking a different approach to quantum computing by trying to develop a topological qubit, which has the ability to store information in multiple places at once. Topological qubits exist within the Majorana zero states and are appealing because they can theoretically offer greater protection against environmental noise that destroys information within a quantum system.

Think of it like an arrow, where the arrowhead holds a portion of the information and the arrow tail holds the rest, Lee said. Distributing information across space like this is called topological protection.

“If you are able to put them far apart from each other, then you have a chance of maintaining the identity of the arrow even if it is subject to noise,” Lee told Salon in a phone interview. “The idea is that if the noise affects the head, it doesn’t kill the arrow and if it affects only the tail it doesn’t kill your arrow. It has to affect both sides simultaneously to kill your arrow, and that is very unlikely if you are able to put them apart.”

… Lee believes that even if the data doesn’t entirely prove that topological qubits exist in the Majorana zero-state, it still represents a scientific advancement. But he noted that several important issues need to be solved before it has practical implications. For one, the coherence time of these particles — or how long they can exist without being affected by environmental noise — is still very short, he explained.

“They make a measurement, come back, and the qubit has changed, so you have lost your coherence,” Lee said. “With this very short time, you cannot do anything with it.”

“I just wish they [Microsoft] were a bit more careful with their claims because I fear that if they don’t measure up to what they are saying, there might be a backlash at some point where people say, ‘You promised us all these fancy things and where are they now?’” Molignini said. “That might damage the entire quantum community, not just themselves.”

If you have the time, please read Hlavinka’s March 17, 2025 article in its entirety.
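An aside on the superposition and decoherence ideas Molignini describes: a qubit’s state can be modeled as a two-component vector of complex amplitudes, and a random “phase kick” shows how noise destroys the interference a quantum computation relies on. This is my own toy numerical sketch (using NumPy), not code from any of the groups mentioned.

```python
import numpy as np

# A qubit state is a normalized 2-vector of amplitudes; measuring yields
# 0 with probability |a0|^2 and 1 with probability |a1|^2.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1.0, 0.0])

plus = H @ zero                   # equal superposition of 0 and 1
print(np.abs(plus) ** 2)          # [0.5 0.5]

# Coherence: a second Hadamard undoes the first, returning |0> exactly.
print(np.abs(H @ plus) ** 2)      # [1. 0.]

# Toy decoherence: a random relative phase kick between the two amplitudes.
rng = np.random.default_rng(0)
kicked = plus * np.array([1, np.exp(1j * rng.normal(scale=2.0))])

# The phase information is scrambled, so the second Hadamard no longer
# restores |0>; the interference the computation relied on is lost.
print(np.abs(H @ kicked) ** 2)
```

Note that the kicked state still measures 0 or 1 with the same 50/50 odds on its own; what the noise destroys is the ability of later gates to interfere the amplitudes constructively, which is exactly the resource error correction is meant to protect.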

D-Wave Quantum Systems claims quantum supremacy on a real-world problem

A March 15, 2025 article by Bob Yirka for phys.org announces the news from D-Wave Quantum Systems. Note: The company, which had its headquarters in Canada (Burnaby, BC), now seems to be a largely US company, with its main headquarters in Palo Alto, California, and an ancillary or junior (?) headquarters in Canada. A link has been removed,

A team of quantum computer researchers at quantum computer maker D-Wave, working with an international team of physicists and engineers, is claiming that its latest quantum processor has been used to run a quantum simulation faster than could be done with a classical computer.

In their paper published in the journal Science, the group describes how they ran a quantum version of a mathematical approximation regarding how matter behaves when it changes states, such as from a gas to a liquid—in a way that they claim would be nearly impossible to conduct on a traditional computer.

Here’s a March 12, 2025 D-Wave Systems (now D-Wave Quantum Systems) news release touting its claimed real-world quantum supremacy,

New landmark peer-reviewed paper published in Science, “Beyond-Classical Computation in Quantum Simulation,” unequivocally validates D-Wave’s achievement of the world’s first and only demonstration of quantum computational supremacy on a useful, real-world problem

Research shows D-Wave annealing quantum computer performs magnetic materials simulation in minutes that would take nearly one million years and more than the world’s annual electricity consumption to solve using a classical supercomputer built with GPU clusters

D-Wave Advantage2 annealing quantum computer prototype used in supremacy achievement, a testament to the system’s remarkable performance capabilities

PALO ALTO, Calif. – March 12, 2025 – D-Wave Quantum Inc. (NYSE: QBTS) (“D-Wave” or the “Company”), a leader in quantum computing systems, software, and services and the world’s first commercial supplier of quantum computers, today announced a scientific breakthrough published in the esteemed journal Science, confirming that its annealing quantum computer outperformed one of the world’s most powerful classical supercomputers in solving complex magnetic materials simulation problems with relevance to materials discovery. The new landmark peer-reviewed paper, “Beyond-Classical Computation in Quantum Simulation,” validates this achievement as the world’s first and only demonstration of quantum computational supremacy on a useful problem.

An international collaboration of scientists led by D-Wave performed simulations of quantum dynamics in programmable spin glasses—computationally hard magnetic materials simulation problems with known applications to business and science—on both D-Wave’s Advantage2TM prototype annealing quantum computer and the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory. The work simulated the behavior of a suite of lattice structures and sizes across a variety of evolution times and delivered a multiplicity of important material properties. D-Wave’s quantum computer performed the most complex simulation in minutes and with a level of accuracy that would take nearly one million years using the supercomputer. In addition, it would require more than the world’s annual electricity consumption to solve this problem using the supercomputer, which is built with graphics processing unit (GPU) clusters.
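(An aside from me, not the news release: a classical spin-glass computation of the kind annealers target can be sketched with single-spin-flip Metropolis Monte Carlo on a toy Ising model. D-Wave’s Science paper concerns quantum dynamics, which this classical toy does not capture at all; the sketch, with entirely hypothetical parameters, only illustrates what “computationally hard magnetic materials simulation problems” look like in miniature.)

```python
import math
import random

# Toy Ising spin glass: N spins s_i = ±1 with random couplings J_ij and
# energy E = -sum_{i<j} J_ij * s_i * s_j. Finding low-energy states of
# such randomly coupled systems is the classic hard problem for annealers.
random.seed(42)
N = 12
J = {(i, j): random.choice([-1.0, 1.0])
     for i in range(N) for j in range(i + 1, N)}

def energy(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def metropolis(steps, T):
    # Single-spin-flip Metropolis sampling at temperature T.
    s = [random.choice([-1, 1]) for _ in range(N)]
    E = energy(s)
    best = E
    for _ in range(steps):
        i = random.randrange(N)
        # Energy change from flipping spin i: dE = 2 * s_i * sum_j J_ij s_j.
        dE = 2 * s[i] * sum(J[tuple(sorted((i, j)))] * s[j]
                            for j in range(N) if j != i)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i] = -s[i]
            E += dE
            best = min(best, E)
    return best

print(metropolis(20_000, T=0.5))  # best energy found; lower is better
```

Even this 12-spin toy has 4,096 configurations; the exponential growth of that state space with system size is why supercomputer-scale simulations of the dynamics become prohibitively expensive.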

“This is a remarkable day for quantum computing. Our demonstration of quantum computational supremacy on a useful problem is an industry first. All other claims of quantum systems outperforming classical computers have been disputed or involved random number generation of no practical value,” said Dr. Alan Baratz, CEO of D-Wave. “Our achievement shows, without question, that D-Wave’s annealing quantum computers are now capable of solving useful problems beyond the reach of the world’s most powerful supercomputers. We are thrilled that D-Wave customers can use this technology today to realize tangible value from annealing quantum computers.”

Realizing an Industry-First Quantum Computing Milestone
The behavior of materials is governed by the laws of quantum physics. Understanding the quantum nature of magnetic materials is crucial to finding new ways to use them for technological advancement, making materials simulation and discovery a vital area of research for D-Wave and the broader scientific community. Magnetic materials simulations, like those conducted in this work, use computer models to study how tiny particles not visible to the human eye react to external factors. Magnetic materials are widely used in medical imaging, electronics, superconductors, electrical networks, sensors, and motors.

“This research proves that D-Wave’s quantum computers can reliably solve quantum dynamics problems that could lead to discovery of new materials,” said Dr. Andrew King, senior distinguished scientist at D-Wave. “Through D-Wave’s technology, we can create and manipulate programmable quantum matter in ways that were impossible even a few years ago.”

Materials discovery is a computationally complex, energy-intensive and expensive task. Today’s supercomputers and high-performance computing (HPC) centers, which are built with tens of thousands of GPUs, do not always have the computational processing power to conduct complex materials simulations in a timely or energy-efficient manner. For decades, scientists have aspired to build a quantum computer capable of solving complex materials simulation problems beyond the reach of classical computers. D-Wave’s advancements in quantum hardware have made it possible for its annealing quantum computers to process these types of problems for the first time.

“This is a significant milestone made possible through over 25 years of research and hardware development at D-Wave, two years of collaboration across 11 institutions worldwide, and more than 100,000 GPU and CPU hours of simulation on one of the world’s fastest supercomputers as well as computing clusters in collaborating institutions,” said Dr. Mohammad Amin, chief scientist at D-Wave. “Besides realizing Richard Feynman’s vision of simulating nature on a quantum computer, this research could open new frontiers for scientific discovery and quantum application development.” 

Advantage2 System Demonstrates Powerful Performance Gains
The results shown in “Beyond-Classical Computation in Quantum Simulation” were enabled by D-Wave’s previous scientific milestones published in Nature Physics (2022) and Nature (2023), which theoretically and experimentally showed that quantum annealing provides a quantum speedup in complex optimization problems. These scientific advancements led to the development of the Advantage2 prototype’s fast anneal feature, which played an essential role in performing the precise quantum calculations needed to demonstrate quantum computational supremacy.

“The broader quantum computing research and development community is collectively building an understanding of the types of computations for which quantum computing can overtake classical computing. This effort requires ongoing and rigorous experimentation,” said Dr. Trevor Lanting, chief development officer at D-Wave. “This work is an important step toward sharpening that understanding, with clear evidence of where our quantum computer was able to outperform classical methods. We believe that the ability to recreate the entire suite of results we produced is not possible classically. We encourage our peers in academia to continue efforts to further define the line between quantum and classical capabilities, and we believe these efforts will help drive the development of ever more powerful quantum computing technology.”

The Advantage2 prototype used to achieve quantum computational supremacy is available for customers to use today via D-Wave’s Leap™ real-time quantum cloud service. The prototype provides substantial performance improvements from previous-generation Advantage systems, including increased qubit coherence, connectivity, and energy scale, which enables higher-quality solutions to larger, more complex problems. Moreover, D-Wave now has an Advantage2 processor that is four times larger than the prototype used in this work and has extended the simulations of this paper from hundreds of qubits to thousands of qubits, which are significantly larger than those described in this paper.

Leading Industry Voices Echo Support
Dr. Hidetoshi Nishimori, Professor, Department of Physics, Tokyo Institute of Technology:
“This paper marks a significant milestone in demonstrating the real-world applicability of large-scale quantum computing. Through rigorous benchmarking of quantum annealers against state-of-the-art classical methods, it convincingly establishes a quantum advantage in tackling practical problems, revealing the transformative potential of quantum computing at an unprecedented scale.”

Dr. Seth Lloyd, Professor of Quantum Mechanical Engineering, MIT:
“Although large-scale, fully error-corrected quantum computers are years in the future, quantum annealers can probe the features of quantum systems today. In an elegant paper, the D-Wave group has used a large-scale quantum annealer to uncover patterns of entanglement in a complex quantum system that lie far beyond the reach of the most powerful classical computer. The D-Wave result shows the promise of quantum annealers for exploring exotic quantum effects in a wide variety of systems.”

Dr. Travis Humble, Director of Quantum Science Center, Distinguished Scientist at Oak Ridge National Laboratory:
“ORNL seeks to expand the frontiers of computation through many different avenues, and benchmarking quantum computing for materials science applications provides critical input to our understanding of new computational capabilities.”

Dr. Juan Carrasquilla, Associate Professor at the Department of Physics, ETH Zürich:
“I believe these results mark a critical scientific milestone for D-Wave. They also serve as an invitation to the scientific community, as these results offer a strong benchmark and motivation for developing novel simulation techniques for out-of-equilibrium dynamics in quantum many-body physics. Furthermore, I hope these findings encourage theoretical exploration of the computational challenges involved in performing such simulations, both classically and quantum-mechanically.”

Dr. Victor Martin-Mayor, Professor of Theoretical Physics, Universidad Complutense de Madrid:
“This paper is not only a tour-de-force for experimental physics, it is also remarkable for the clarity of the results. The authors have addressed a problem that is regarded both as important and as very challenging to a classical computer. The team has shown that their quantum annealer performs better at this task than the state-of-the-art methods for classical simulation.”

Dr. Alberto Nocera, Senior Staff Scientist, The University of British Columbia:
“Our work shows the impracticability of state-of-the-art classical simulations to simulate the dynamics of quantum magnets, opening the door for quantum technologies based on analog simulators to solve scientific questions that may otherwise remain unanswered using conventional computers.”

About D-Wave Quantum Inc.
D-Wave is a leader in the development and delivery of quantum computing systems, software, and services. We are the world’s first commercial supplier of quantum computers, and the only company building both annealing and gate-model quantum computers. Our mission is to help customers realize the value of quantum, today. Our 5,000+ qubit Advantage™ quantum computers, the world’s largest, are available on-premises or via the cloud, supported by 99.9% availability and uptime. More than 100 organizations trust D-Wave with their toughest computational challenges. With over 200 million problems submitted to our Advantage systems and Advantage2™ prototypes to date, our customers apply our technology to address use cases spanning optimization, artificial intelligence, research and more. Learn more about realizing the value of quantum computing today and how we’re shaping the quantum-driven industrial and societal advancements of tomorrow: www.dwavequantum.com.

Forward-Looking Statements
Certain statements in this press release are forward-looking, as defined in the Private Securities Litigation Reform Act of 1995. These statements involve risks, uncertainties, and other factors that may cause actual results to differ materially from the information expressed or implied by these forward-looking statements and may not be indicative of future results. These forward-looking statements are subject to a number of risks and uncertainties, including, among others, various factors beyond management’s control, including the risks set forth under the heading “Risk Factors” discussed under the caption “Item 1A. Risk Factors” in Part I of our most recent Annual Report on Form 10-K or any updates discussed under the caption “Item 1A. Risk Factors” in Part II of our Quarterly Reports on Form 10-Q and in our other filings with the SEC. Undue reliance should not be placed on the forward-looking statements in this press release in making an investment decision, which are based on information available to us on the date hereof. We undertake no duty to update this information unless required by law.

Here’s a link to and a citation for the most recent paper,

Beyond-classical computation in quantum simulation by Andrew D. King , Alberto Nocera, Marek M. Rams, Jacek Dziarmaga, Roeland Wiersema, William Bernoudy, Jack Raymond, Nitin Kaushal, Niclas Heinsdorf, Richard Harris, Kelly Boothby, Fabio Altomare, Mohsen Asad, Andrew J. Berkley, Martin Boschnak, Kevin Chern, Holly Christiani, Samantha Cibere, Jake Connor, Martin H. Dehn, Rahul Deshpande, Sara Ejtemaee, Pau Farre, Kelsey Hamer, Emile Hoskinson, Shuiyuan Huang, Mark W. Johnson, Samuel Kortas, Eric Ladizinsky, Trevor Lanting, Tony Lai, Ryan Li, Allison J. R. MacDonald, Gaelen Marsden, Catherine C. McGeoch, Reza Molavi, Travis Oh, Richard Neufeld, Mana Norouzpour, Joel Pasvolsky, Patrick Poitras, Gabriel Poulin-Lamarre, Thomas Prescott, Mauricio Reis, Chris Rich, Mohammad Samani, Benjamin Sheldan, Anatoly Smirnov, Edward Sterpka, Berta Trullas Clavera, Nicholas Tsai, Mark Volkmann, Alexander M. Whiticar, Jed D. Whittaker, Warren Wilkinson, Jason Yao, T.J. Yi, Anders W. Sandvik, Gonzalo Alvarez, Roger G. Melko, Juan Carrasquilla, Marcel Franz, and Mohammad H. Amin. Science 12 Mar 2025 First Release DOI: 10.1126/science.ado6285

This paper appears to be open access. Note: I usually tag all of the authors, but not this time either.

A controversy of sorts

Madison McLauchlan’s March 19, 2025 article for Betakit (website for Canadian Startup News & Tech Innovation), Note: Links have been removed,

Canadian-born company D-Wave Quantum Systems said it achieved “quantum supremacy” last week after publishing what it calls a groundbreaking paper in the prestigious journal Science. Despite the lofty term, Canadian experts say supremacy is not the be-all, end-all of quantum innovation. 

D-Wave, which has labs in Palo Alto, Calif., and Burnaby, BC, claimed in a statement that it has shown “the world’s first and only demonstration of quantum computational supremacy on a useful, real-world problem.”

Coined in the early 2010s by physicist John Preskill, quantum supremacy is the ability of a quantum computing system to solve a problem no classical computer can in a feasible amount of time. The metric makes no mention of whether the problem needs to be useful or relevant to real life. Google researchers published a paper in Nature in 2019 claiming they cleared that bar with the Sycamore quantum processor. Researchers at the University of Science and Technology in China claimed they demonstrated quantum supremacy several times. 

D-Wave’s attempt differs in that its researchers aimed to solve a real-world materials-simulation problem with quantum computing—one the company claims would be nearly impossible for a traditional computer to solve in a reasonable amount of time. D-Wave used an annealing quantum computer designed to solve optimization problems. The problem is represented as an energy landscape, where the “lowest energy state” corresponds to the solution. 
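The energy-landscape idea described above can be illustrated with a toy classical analogue. The sketch below runs plain simulated annealing on a tiny, hypothetical Ising-style problem (a four-spin ring with made-up couplings); it is not D-Wave's quantum annealing hardware or algorithm, just the "roll downhill in an energy space" intuition.

```python
import math
import random

# Antiferromagnetic couplings on a 4-spin ring: neighbours prefer to anti-align.
# These couplings are invented for illustration.
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}

def energy(spins):
    """Ising energy: sum over coupled pairs of J_ij * s_i * s_j."""
    return sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def anneal(steps=5000, t_start=2.0, t_end=0.01, seed=1):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(4)]
    e = energy(spins)
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(len(spins))
        spins[i] *= -1                       # propose flipping one spin
        e_new = energy(spins)
        # Accept downhill (or equal-energy) moves always; accept uphill
        # moves with Boltzmann probability, which shrinks as t cools.
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            spins[i] *= -1                   # reject: undo the flip
    return spins, e

spins, e = anneal()
# For this landscape the ground state alternates spins, with energy -4.
```

Quantum annealers tackle the same kind of minimization, but exploit quantum effects such as tunnelling through energy barriers rather than thermal hops over them.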

While exciting, quantum supremacy is just one metric among several that mark the progress toward widely useful quantum computers, industry experts told BetaKit. 

“It is a very important and mostly academic metric, but certainly not the most important in the grand scheme of things, as it doesn’t take into account the usefulness of the algorithm,” said Martin Laforest, managing partner at Quantacet, a specialized venture capital fund for quantum startups. 

He added that Google and Xanadu’s [Xanadu Quantum Technologies based in Toronto, Canada] past claims to quantum supremacy were “extraordinary pieces of work, but didn’t unlock practicality.” 

Laforest, along with executives at Canadian quantum startups Nord Quantique and Photonic, say that the milestones of ‘quantum utility’ or ‘quantum advantage’ may be more important than supremacy. 

According to Quantum computing company Quera [QuEra?], quantum advantage is the demonstration of a quantum algorithm solving a real-world problem on a quantum computer faster than any classical algorithm running on any classical computer. On the other hand, quantum utility, according to IBM, refers to when a quantum computer is able to perform reliable computations at a scale beyond brute-force classical computing methods that provide exact solutions to computational problems. 

Error correction hasn’t traditionally been considered a requirement for quantum supremacy, but Laforest told BetaKit the term is “an ever-moving target, constantly challenged by advances in classical algorithms.” He added: “In my opinion, some level of supremacy or utility may be possible in niche areas without error correction, but true disruption requires it.”

Paul Terry, CEO of Vancouver-based Photonic, thinks that though D-Wave’s claim to quantum supremacy shows “continued progress to real value,” scalability is the industry’s biggest hurdle to overcome.

But as with many milestone claims in the quantum space, D-Wave’s latest innovation has been met with scrutiny from industry competitors and researchers on the breakthrough’s significance, claiming that classical computers have achieved similar results. Laforest echoed this sentiment.

“Personally, I wouldn’t say it’s an unequivocal demonstration of supremacy, but it is a damn nice experiment that once again shows the murky zone between traditional computing and early quantum advantage,” Laforest said.

Originally founded out of the University of British Columbia, D-Wave went public on the New York Stock Exchange just over two years ago through a merger with a special-purpose acquisition company in 2022. D-Wave became a Delaware-domiciled corporation as part of the deal.

Earlier this year, D-Wave’s stock price dropped after Nvidia CEO Jensen Huang publicly stated that he estimated that useful quantum computers were more than 15 years away. D-Wave’s stock price, which had been struggling, has seen a considerable bump in recent months alongside a broader boost in the quantum market. The price popped after its most recent earnings, shared right after its quantum supremacy announcement. 

The beat goes on

Some of this is standard in science. There’s always a debate over big claims and it’s not unusual for people to get overexcited and have to make a retraction. Scientists are people too. That said, there’s a lot of money on the line and that appears to be making the situation even more volatile than usual.

That last paragraph was completed on the morning of March 21, 2025 and later that afternoon I came across this March 21, 2025 article by Michael Grothaus for Fast Company, Note: Links have been removed,

Quantum computing stocks got pummeled yesterday, with the four most prominent public quantum computing companies—IonQ, Rigetti Computing, Quantum Computing Inc., and D-Wave Quantum Inc.—falling anywhere from over 9% to over 18%. The reason? A lot of it may have to do with AI chip giant Nvidia. Again.

Stocks crash yesterday on Nvidia quantum news

Yesterday was a bit of a bloodbath on the stock market for the four most prominent publicly traded quantum computing companies. …

All four of these quantum computing stocks [IonQ, Inc.; Rigetti Computing, Inc.; Quantum Computing Inc.; D-Wave Quantum Inc.] tumbled on the day that AI chip giant Nvidia kicked off its two-day Quantum Day event. In a blog post from January 14 announcing Quantum Day, Nvidia said the event “brings together leading experts for a comprehensive and balanced perspective on what businesses should expect from quantum computing in the coming decades — mapping the path toward useful quantum applications.”

Besides bringing quantum experts together, the AI behemoth also announced that it will be launching a new quantum computing research center in Boston.

Called the NVIDIA Accelerated Quantum Research Center (NVAQC), the new research lab “will help solve quantum computing’s most challenging problems, ranging from qubit noise to transforming experimental quantum processors into practical devices,” the company said in a press release.

The NVAQC’s location in Boston means it will be near both Harvard University and the Massachusetts Institute of Technology (MIT). 

Before Nvidia’s announcement yesterday, IonQ, Rigetti, D-Wave, and Quantum Computing Inc. were the leaders in the nascent field of quantum computing. And while they still are right now (Nvidia’s quantum research lab hasn’t been built yet), the fear is that Nvidia could use its deep pockets to quickly buy its way into a leadership spot in the field. With its $2.9 trillion market cap, the company can easily afford to throw billions of research dollars into quantum computing.

As noted by the Motley Fool, the location of the NVIDIA Accelerated Quantum Research Center in Boston will also allow Nvidia to more easily tap into top quantum talent from Harvard and MIT—talent that may have otherwise gone to IonQ, Rigetti, D-Wave, and Quantum Computing Inc.

Nvidia’s announcement is a massive about-face from the company in regard to how it views quantum computing. It’s also the second time that Nvidia has caused quantum stocks to crash this year. Back in January, shares in prominent quantum computing companies fell after Huang said that practical use of quantum computing was decades away.

Those comments were something quantum computing company CEOs like D-Wave’s Alan Baratz took issue with. “It’s an egregious error on Mr. Huang’s part,” Baratz told Fast Company at the time. “We’re not decades away from commercial quantum computers. They exist. There are companies that are using our quantum computer today.”

According to Investor’s Business Daily, Huang reportedly got the idea for Nvidia’s Quantum Day event after the blowback to his comments, inviting quantum computing executives to the event to explain why he was incorrect about quantum computing.

The word is volatile.

Using measurements to generate quantum entanglement and teleportation

Caption: The researchers at Google Quantum AI and Stanford University explored how measurements can fundamentally change the structure of quantum information in space-time. Credit: Google Quantum AI, designed by Sayo-Art

An interesting approach to illustrating a complex scientific concept! This October 18, 2023 news item on phys.org describes the measurement problem,

Quantum mechanics is full of weird phenomena, but perhaps none as weird as the role measurement plays in the theory. Since a measurement tends to destroy the “quantumness” of a system, it seems to be the mysterious link between the quantum and classical world. And in a large system of quantum bits of information, known as “qubits,” the effect of measurements can induce dramatically new behavior, even driving the emergence of entirely new phases of quantum information.

This happens when two competing effects come to a head: interactions and measurement. In a quantum system, when the qubits interact with one another, their information becomes shared nonlocally in an “entangled state.” But if you measure the system, the entanglement is destroyed. The battle between measurement and interactions leads to two distinct phases: one where interactions dominate and entanglement is widespread, and one where measurements dominate, and entanglement is suppressed.

An October 18, 2023 Google Quantum AI news release, which originated the news item, on EurekAlert provides more information about a research collaboration between Google and Stanford University,

As reported today [October 18, 2023] in the journal Nature, researchers at Google Quantum AI and Stanford University have observed the crossover between these two regimes — known as a “measurement-induced phase transition” — in a system of up to 70 qubits. This is by far the largest system in which measurement-induced effects have been explored. The researchers also saw signatures of a novel form of “quantum teleportation” — in which an unknown quantum state is transferred from one set of qubits to another — that emerges as a result of these measurements. These studies could help inspire new techniques useful for quantum computing.

One can visualize the entanglement in a system of qubits as an intricate web of connections. When we measure an entangled system, the impact it has on the web depends on the strength of the measurement. It could destroy the web completely, or it could snip and prune selected strands of the web, but leave others intact. 

To actually see this web of entanglement in an experiment is notoriously challenging. The web itself is invisible, so researchers can only infer its existence by seeing statistical correlations between the measurement outcomes of qubits. Many, many runs of the same experiment are needed to infer the pattern of the web. This and other challenges have plagued past experiments and limited the study of measurement-induced phase transitions to very small system sizes. 

To address these challenges, the researchers used a few experimental sleights of hand. First, they rearranged the order of operations so that all the measurements could be made at the end of the experiment, rather than interleaved throughout, thus reducing the complexity of the experiment. Second, they developed a new way to measure certain features of the web with a single “probe” qubit. In this way, they could learn more about the entanglement web from fewer runs of the experiment than had been previously required. Finally, the probe, like all qubits, was susceptible to unwanted noise in the environment. This is normally seen as a bad thing, as noise can disrupt quantum calculations, but the researchers turned this bug into a feature by noting that the probe’s sensitivity to noise depended on the nature of the entanglement web around it. They could therefore use the probe’s noise sensitivity to infer the entanglement of the whole system.

The team first looked at this difference in sensitivity to noise in the two entanglement regimes and found distinctly different behaviors. When measurements dominated over interactions (the “disentangling phase”), the strands of the web remained relatively short. The probe qubit was only sensitive to the noise of its nearest qubits. In contrast, when the measurements were weaker and entanglement was more widespread (the “entangling phase”) the probe was sensitive to noise throughout the entire system. The crossover between these two sharply contrasting behaviors is a signature of the sought-after measurement-induced phase transition.

The team also demonstrated a novel form of quantum teleportation that emerged naturally from the measurements: by measuring all but two distant qubits in a weakly entangled state, stronger entanglement was generated between those two distant qubits. The ability to generate measurement-induced entanglement across long distances enables the teleportation observed in the experiment.

The stability of entanglement against measurements in the entangling phase could inspire new schemes to make quantum computing more robust to noise. The role that measurements play in driving new phases and physical phenomena is also of fundamental interest to physicists. Stanford professor and co-author of the study, Vedika Khemani, says, “Incorporating measurements into dynamics introduces a whole new playground for many-body physics where many fascinating and new types of non-equilibrium phases could be found. We explore a few of these striking and counter-intuitive measurement induced phenomena in this work, but there is much more richness to be discovered in the future.” 

Before getting to the citation for and link to the paper, I have an interview with some of the researchers that was written up by Holly Alyssa MacCormick (Associate Director of Public Relations, science writer and news editor for Stanford School of Humanities and Sciences) in an October 18, 2023 article for Stanford University, Note 1: Some of this will be redundant; Note 2: Links have been removed,

Harnessing the “weirdness” of quantum mechanics to solve practical problems is the long-standing promise of quantum computing. But much like the state of the cat in Erwin Schrödinger’s famous thought experiment, quantum mechanics is still a box of unknowns. Similar to the solid, liquid, and gas phases of matter, the organization of quantum information, too, can assume different phases. Yet unlike the phases of matter we are familiar with in everyday life, the phases of quantum information are much harder to formulate and observe and as a result have been only a theoretical dream until recently.

Measurements are arguably the weirdest facet of quantum mechanics. Intuition tells us that a state has some definite property and measurement reveals that property. However, measurements in quantum mechanics produce intrinsically random results, and the act of measurement irreversibly changes the state itself. Unlike laptops, smartphones, and other classical computers that rely on binary “bits” to code in the state of 0 (off) or 1 (on), quantum computers use “qubits” of information that can be in the state of 0, 1, or 0 and 1 at the same time, a concept known as superposition. The act of measurement doesn’t just extract information, but also changes the state, randomly “collapsing” a superposition into a specific value (0 or 1).

Moreover, this collapse affects not just the qubit that was measured, but also potentially the entire system—an effect described by Einstein as “spooky action at a distance.” This is due to “entanglement,” a quantum property that allows multiple particles in different places to jointly be in superposition, which is a key ingredient for quantum computing. The collapse of an entangled state can also enable spooky phenomena such as “teleportation,” thereby irretrievably altering the “arrow of time” (the concept that time moves in one forward direction) that governs our everyday experience.
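The superposition, entanglement, and "collapse" described in the two paragraphs above can be sketched in a few lines of statevector arithmetic. This is a pedagogical toy, not code from the Google/Stanford experiment: it builds the two-qubit Bell state and shows that measuring one qubit also fixes the other, unmeasured qubit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-qubit Bell state (|00> + |11>)/sqrt(2);
# amplitudes ordered |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def measure_qubit0(state):
    """Projectively measure qubit 0; return (random outcome, collapsed state)."""
    p0 = abs(state[0]) ** 2 + abs(state[1]) ** 2   # P(qubit 0 reads 0)
    if rng.random() < p0:
        collapsed = np.array([state[0], state[1], 0, 0]) / np.sqrt(p0)
        return 0, collapsed
    collapsed = np.array([0, 0, state[2], state[3]]) / np.sqrt(1 - p0)
    return 1, collapsed

# The outcome is intrinsically random, but the collapse is global:
# measuring qubit 0 leaves qubit 1 with the same value, even though
# qubit 1 was never touched.
outcome, post = measure_qubit0(bell)
```

Repeating the measurement many times gives 0 and 1 each about half the time, yet the two qubits always agree: the post-measurement state is always |00> or |11>, never |01> or |10>. That perfect correlation between randomly collapsing qubits is the signature of entanglement.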

In other words, measurements can be used to fundamentally reorganize the structure of quantum information in space and time.

Now, a new collaboration between Stanford and Google Quantum AI investigates the effect of measurements on quantum systems of many particles on Google’s quantum computer and has obtained the largest experimental demonstration of novel measurement-induced phases of quantum information to date. The study was co-led by Jesse Hoke, a physics graduate student and fellow at Stanford’s Quantum Science and Engineering initiative (Q-FARM), Matteo Ippoliti, a former postdoctoral scholar in the Department of Physics, and senior author Vedika Khemani, associate professor of physics at the Stanford School of Humanities and Sciences and Q-FARM. Their results were published Oct. 18 in the journal Nature.

Here, Hoke, Ippoliti, and Khemani discuss how they observed measurement-induced phases of quantum information—a feat once thought to be beyond the realm of what could be achieved in an experiment—and how their new insights could help pave the way for advancements in quantum science and engineering.

Question: What distinguishes the phases investigated in this study from one another, and what is teleportation?

Ippoliti: In the simplest case, there are two phases. In one phase, the structure of quantum information in the system forms a strongly connected web where qubits share a lot of entanglement, even at large spatial distances and/or temporal separations. In the other, the system is weakly connected, so correlations like entanglement decay quickly with distance or time. These are the two phases that we probed in our experiment. The strongly entangled phase enables teleportation, which occurs when the state of one qubit is instantly transmitted, or “teleported,” to another far away qubit by measuring all but those two qubits.

Question: How did you control when a phase transition occurred?

Khemani: The competing forces at play are the interactions between qubits, which tend to build entanglement, and measurements of the qubits, which can destroy it. This is the famous “wave function collapse” of quantum mechanics—think of Schrödinger’s cat “collapsing” into one of two states (dead or alive) when we open the box. However, because of entanglement, the collapse is not restricted to the qubit we directly measure but affects the rest of the system too. By controlling the strength or frequency of measurements on the quantum computer, we can induce a phase transition between an entangled phase and a disentangled one.

Question: What were some of the challenges your team needed to overcome to measure quantum states, and how did you do it?

Ippoliti: Measurements in quantum mechanics are inherently random, which makes observing these phases notoriously challenging. This is because every repetition of our experiment produces a different, random-looking quantum state. This is a problem because detecting entanglement (the feature that sets our two phases apart) requires observations on many copies of the same state. To get around this difficulty, we developed a diagnostic that cross-correlates data from the quantum processor with the results of simulations on classical computers. This hybrid quantum-classical diagnostic allowed us to see evidence of the different phases on up to 70 qubits, making this one of the largest digital quantum simulations and experiments to date.

Hoke: Another challenge was that quantum experiments are currently limited by environmental noise. Entanglement is a delicate resource that is easily destroyed by interactions from the outside environment, which is the primary challenge in quantum computing. In our setup, we probe the entanglement structure between the system’s qubits, which is destroyed if the system is not perfectly isolated and instead gets entangled with the surrounding environment. We addressed this challenge by devising a diagnostic that uses noise as a feature rather than a bug—the two phases (weak and strong entanglement) respond to noise in different ways, and we used this as a probe of the phases.

Khemani: In addition, we used the fact that the “arrow of time” loses meaning with measurement-induced teleportation. This allowed us to reorganize the sequence of operations on the quantum computer in advantageous ways to mitigate the effects of noise and to devise new probes of the organization of quantum information in space-time.

Question: What do the findings mean?

Khemani: At the level of fundamental science, our experiments demonstrate new phenomena that extend our familiar concepts of “phase structure.” Instead of thinking of measurements merely as probes, we are now thinking of them as an intrinsic part of quantum dynamics, which can be used to create and manipulate novel quantum correlations. At the level of applications, using measurements to robustly generate structured entanglement is inspiring new ways to make quantum computing more robust against noise. More generally, our understanding of general phases of quantum information and dynamics is still nascent, and many exciting surprises await.

Acknowledgements

Hoke conducted research on this study while working as an intern at Google Quantum AI under the supervision of Xiao Mi and Pedram Roushan. Ippoliti is now an assistant professor of physics at the University of Texas at Austin. Additional co-authors on this study include the Google Quantum AI team and researchers from the University of Massachusetts, Amherst; Auburn University; University of Technology, Sydney; University of California, Riverside; and Columbia University. The full list of authors is available in the Nature paper.

Ippoliti was funded in part by the Gordon and Betty Moore Foundation’s EPiQS Initiative. Khemani was funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences; the Alfred P. Sloan Foundation; and the Packard Foundation.

Here’s a link to and a citation for the paper, Note: There are well over 100 contributors to the paper and I have not listed each one separately. You can find the list if you go to the Nature paper and click on Google Quantum AI and Collaborators in the author field,

Measurement-induced entanglement and teleportation on a noisy quantum processor by Google Quantum AI and Collaborators. Nature volume 622, pages 481–486 (2023) DOI: https://doi.org/10.1038/s41586-023-06505-7 Published online: 18 October 2023 Issue Date: 19 October 2023

This paper is open access.

A jellybean solution to a problem with quantum computing chips

https://youtube.com/watch?v=BDeNF0IRfvE

A May 11, 2023 news item on phys.org heralds this new development, Note: A link has been removed,

The silicon microchips of future quantum computers will be packed with millions, if not billions of qubits—the basic units of quantum information—to solve the greatest problems facing humanity. And with millions of qubits needing millions of wires in the microchip circuitry, it was always going to get cramped in there.

But now engineers at UNSW [University of New South Wales] Sydney have made an important step toward solving a long-standing problem about giving their qubits more breathing space—and it all revolves around jellybeans.

Not the kind we rely on for a sugar hit to get us past the 3pm slump. But jellybean quantum dots – elongated areas between qubit pairs that create more space for wiring without interrupting the way the paired qubits interact with each other.

A May 10, 2023 University of New South Wales (UNSW) press release (also published on EurekAlert), which originated the news item, delves further into the ‘jellybean solution’, Note: A link has been removed,

As lead author Associate Professor Arne Laucht explains, the jellybean quantum dot is not a new concept in quantum computing, and has been discussed as a solution to some of the many pathways towards building the world’s first working quantum computer.

“It has been shown in different material systems such as gallium arsenide. But it has not been shown in silicon before,” he says.

Silicon is arguably one of the most important materials in quantum computing, A/Prof. Laucht says, as the infrastructure to produce future quantum computing chips is already available, given we use silicon chips in classical computers. Another benefit is that you can fit so many qubits (in the form of electrons) on the one chip.

“But because the qubits need to be so close together to share information with one another, placing wires between each pair was always going to be a challenge.”

In a study published today in Advanced Materials, the UNSW team of engineers describe how they showed in the lab that jellybean quantum dots were possible in silicon. This now opens the way for qubits to be spaced apart to ensure that the wires necessary to connect and control the qubits can be fit in between.

How it works

In a normal quantum dot using spin qubits, single electrons are pulled from a pool of electrons in silicon to sit under a ‘quantum gate’ – where the spin of each electron represents the computational state. For example, spin up may represent a 0 and spin down could represent a 1. Each qubit can then be controlled by an oscillating magnetic field of microwave frequency.

But to implement a quantum algorithm, we also need two-qubit gates, where the control of one qubit is conditional on the state of the other. For this to work, both quantum dots need to be placed very closely, just a few 10s of nanometres apart so their spins can interact with one another. (To put this in perspective, a single human hair is about 100,000 nanometres thick.)
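The "control of one qubit conditional on the state of the other" described above can be written down concretely as the CNOT gate, sketched below as linear algebra on a two-qubit statevector. This is illustrative only: in real spin-qubit devices the gate is realised physically (for example via the exchange interaction between neighbouring electrons), not as a stored matrix.

```python
import numpy as np

# Amplitude ordering |control target>: |00>, |01>, |10>, |11>.
# CNOT flips the target qubit only when the control qubit is 1.
CNOT = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],   # |10> -> |11>
    [0, 0, 1, 0],   # |11> -> |10>
], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2, dtype=complex)

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I2) @ state                 # put the control in superposition
state = CNOT @ state                           # conditional flip entangles the pair
# state is now the Bell state (|00> + |11>)/sqrt(2)
```

The conditional flip is exactly what turns a mere superposition on one qubit into entanglement between two, which is why two-qubit gates (and hence closely spaced, interacting qubits) are indispensable.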

But moving them further apart to create more real estate for wiring has always been the challenge facing scientists and engineers. The problem was as the paired qubits move apart, they would then stop interacting.

The jellybean solution represents a way of having both: nicely spaced qubits that continue to influence one another. To make the jellybean, the engineers found a way to create a chain of electrons by trapping more electrons in between the qubits. This acts as the quantum version of a string phone so that the two paired qubit electrons at each end of the jellybean can continue to talk to one another. Only the electrons at each end are involved in any computations, while the electrons in the jellybean dot are there to keep them interacting while spread apart.

The lead author of the paper, former PhD student Zeheng Wang says the number of extra electrons pulled into the jellybean quantum dot is key to how they arrange themselves.

“We showed in the paper that if you only load a few electrons in that puddle of electrons that you have underneath, they break into smaller puddles. So it’s not one continuous jellybean quantum dot, it’s a smaller one here, and a bigger one in the middle and a smaller one there. We’re talking of a total of three to maybe ten electrons.

“It’s only when you go to larger numbers of electrons, say 15 or 20 electrons, that the jellybean becomes more continuous and homogeneous. And that’s where you have your well-defined spin and quantum states that you can use to couple qubits to one another.”

Post-jellybean quantum world

A/Prof. Laucht stresses that there is still much work to be done. The team’s efforts for this paper focused on proving the jellybean quantum dot is possible. The next step is to insert working qubits at each end of the jellybean quantum dot and make them talk to one another.

“It is great to see this work realised. It boosts our confidence that jellybean couplers can be utilised in silicon quantum computers, and we are excited to try implementing them with qubits next.”

Whoever wrote the press release seems to have had a lot of fun with the jellybeans. Thank you.

Here’s a link to and a citation for the paper,

Jellybean Quantum Dots in Silicon for Qubit Coupling and On-Chip Quantum Chemistry by Zeheng Wang, MengKe Feng, Santiago Serrano, William Gilbert, Ross C. C. Leon, Tuomo Tanttu, Philip Mai, Dylan Liang, Jonathan Y. Huang, Yue Su, Wee Han Lim, Fay E. Hudson, Christopher C. Escott, Andrea Morello, Chih Hwan Yang, Andrew S. Dzurak, Andre Saraiva, Arne Laucht. Advanced Materials Volume 35, Issue 19, May 11, 2023, 2208557 DOI: https://doi.org/10.1002/adma.202208557 First published online: 20 February 2023

This paper is open access.

Graphene can be used in quantum components

A November 3, 2022 news item on phys.org provides a brief history of graphene before announcing the latest work from ETH Zurich,

Less than 20 years ago, Konstantin Novoselov and Andre Geim first created two-dimensional crystals consisting of just one layer of carbon atoms. Known as graphene, this material has had quite a career since then.

Due to its exceptional strength, graphene is used today to reinforce products such as tennis rackets, car tires or aircraft wings. But it is also an interesting subject for fundamental research, as physicists keep discovering new, astonishing phenomena that have not been observed in other materials.

The right twist

Bilayer graphene crystals, in which the two atomic layers are slightly rotated relative to each other, are particularly interesting for researchers. About one year ago, a team of researchers led by Klaus Ensslin and Thomas Ihn at ETH Zurich’s Laboratory for Solid State Physics was able to demonstrate that twisted graphene could be used to create Josephson junctions, the fundamental building blocks of superconducting devices.

Based on this work, researchers were now able to produce the first superconducting quantum interference device, or SQUID, from twisted graphene for the purpose of demonstrating the interference of superconducting quasiparticles. Conventional SQUIDs are already being used, for instance in medicine, geology and archaeology. Their sensitive sensors are capable of measuring even the smallest changes in magnetic fields. However, SQUIDs work only in conjunction with superconducting materials, so they require cooling with liquid helium or nitrogen when in operation.

In quantum technology, SQUIDs can host quantum bits (qubits); that is, as elements for carrying out quantum operations. “SQUIDs are to superconductivity what transistors are to semiconductor technology—the fundamental building blocks for more complex circuits,” Ensslin explains.

A November 3, 2022 ETH Zurich news release by Felix Würsten, which originated the news item, delves further into the work,

The spectrum is widening

The graphene SQUIDs created by doctoral student Elías Portolés are not more sensitive than their conventional counterparts made from aluminium and also have to be cooled down to temperatures lower than 2 degrees above absolute zero. “So it’s not a breakthrough for SQUID technology as such,” Ensslin says. However, it does broaden graphene’s application spectrum significantly. “Five years ago, we were already able to show that graphene could be used to build single-electron transistors. Now we’ve added superconductivity,” Ensslin says.

What is remarkable is that the graphene’s behaviour can be controlled in a targeted manner by biasing an electrode. Depending on the voltage applied, the material can be insulating, conducting or superconducting. “The rich spectrum of opportunities offered by solid-state physics is at our disposal,” Ensslin says.

Also interesting is that the two fundamental building blocks of a semiconductor (transistor) and a superconductor (SQUID) can now be combined in a single material. This makes it possible to build novel control operations. “Normally, the transistor is made from silicon and the SQUID from aluminium,” Ensslin says. “These are different materials requiring different processing technologies.”

An extremely challenging production process

Superconductivity in graphene was discovered by an MIT [Massachusetts Institute of Technology] research group five years ago, yet there are only a dozen or so experimental groups worldwide that look at this phenomenon. Even fewer are capable of converting superconducting graphene into a functioning component.

The challenge is that scientists have to carry out several delicate work steps one after the other: First, they have to align the graphene sheets at the exact right angle relative to each other. The next steps then include connecting electrodes and etching holes. If the graphene were to be heated up, as happens often during cleanroom processing, the two layers re-align and the twist angle vanishes. “The entire standard semiconductor technology has to be readjusted, making this an extremely challenging job,” Portolés says.

The vision of hybrid systems

Ensslin is thinking one step ahead. Quite a variety of different qubit technologies are currently being assessed, each with its own advantages and disadvantages. For the most part, this is being done by various research groups within the National Center of Competence in Quantum Science and Technology (QSIT). If scientists succeed in coupling two of these systems using graphene, it might be possible to combine their benefits as well. “The result would be two different quantum systems on the same crystal,” Ensslin says.

This would also generate new possibilities for research on superconductivity. “With these components, we might be better able to understand how superconductivity in graphene comes about in the first place,” he adds. “All we know today is that there are different phases of superconductivity in this material, but we do not yet have a theoretical model to explain them.”

Here’s a link to and a citation for the paper,

A tunable monolithic SQUID in twisted bilayer graphene by Elías Portolés, Shuichi Iwakiri, Giulia Zheng, Peter Rickhaus, Takashi Taniguchi, Kenji Watanabe, Thomas Ihn, Klaus Ensslin & Folkert K. de Vries. Nature Nanotechnology volume 17, pages 1159–1164 (2022) Issue Date: November 2022 DOI: https://doi.org/10.1038/s41565-022-01222-0 Published online: 24 October 2022

This paper is behind a paywall.

Exotic magnetism: a quantum simulation from D-Wave Systems

Vancouver (Canada) area company, D-Wave Systems is trumpeting itself (with good reason) again. This 2021 ‘milestone’ achievement builds on work from 2018 (see my August 23, 2018 posting for the earlier work). For me, the big excitement was finding the best explanation for quantum annealing and D-Wave’s quantum computers that I’ve seen yet (that explanation and a link to more is at the end of this posting).

A February 18, 2021 news item on phys.org announces the latest achievement,

D-Wave Systems Inc. today [February 18, 2021] published a milestone study in collaboration with scientists at Google, demonstrating a computational performance advantage, increasing with both simulation size and problem hardness, to over 3 million times that of corresponding classical methods. Notably, this work was achieved on a practical application with real-world implications, simulating the topological phenomena behind the 2016 Nobel Prize in Physics. This performance advantage, exhibited in a complex quantum simulation of materials, is a meaningful step in the journey toward applications advantage in quantum computing.

A February 18, 2021 D-Wave Systems press release (also on EurekAlert), which originated the news item, describes the work in more detail,

The work by scientists at D-Wave and Google also demonstrates that quantum effects can be harnessed to provide a computational advantage in D-Wave processors, at problem scale that requires thousands of qubits. Recent experiments performed on multiple D-Wave processors represent by far the largest quantum simulations carried out by existing quantum computers to date.

The paper, entitled “Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets”, was published in the journal Nature Communications (DOI 10.1038/s41467-021-20901-5, February 18, 2021). D-Wave researchers programmed the D-Wave 2000Q™ system to model a two-dimensional frustrated quantum magnet using artificial spins. The behavior of the magnet was described by the Nobel-prize winning work of theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless. They predicted a new state of matter in the 1970s characterized by nontrivial topological properties. This new research is a continuation of previous breakthrough work published by D-Wave’s team in a 2018 Nature paper entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits” (Vol. 560, Issue 7719, August 22, 2018). In this latest paper, researchers from D-Wave, alongside contributors from Google, utilize D-Wave’s lower noise processor to achieve superior performance and glean insights into the dynamics of the processor never observed before.

“This work is the clearest evidence yet that quantum effects provide a computational advantage in D-Wave processors,” said Dr. Andrew King, principal investigator for this work at D-Wave. “Tying the magnet up into a topological knot and watching it escape has given us the first detailed look at dynamics that are normally too fast to observe. What we see is a huge benefit in absolute terms, with the scaling advantage in temperature and size that we would hope for. This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn’t have been possible today without D-Wave’s lower noise processor.”

“The search for quantum advantage in computations is becoming increasingly lively because there are special problems where genuine progress is being made. These problems may appear somewhat contrived even to physicists, but in this paper from a collaboration between D-Wave Systems, Google, and Simon Fraser University [SFU], it appears that there is an advantage for quantum annealing using a special purpose processor over classical simulations for the more ‘practical’ problem of finding the equilibrium state of a particular quantum magnet,” said Prof. Dr. Gabriel Aeppli, professor of physics at ETH Zürich and EPF Lausanne, and head of the Photon Science Division of the Paul Scherrer Institute. “This comes as a surprise given the belief of many that quantum annealing has no intrinsic advantage over path integral Monte Carlo programs implemented on classical processors.”

“Nascent quantum technologies mature into practical tools only when they leave classical counterparts in the dust in solving real-world problems,” said Hidetoshi Nishimori, Professor, Institute of Innovative Research, Tokyo Institute of Technology. “A key step in this direction has been achieved in this paper by providing clear evidence of a scaling advantage of the quantum annealer over an impregnable classical computing competitor in simulating dynamical properties of a complex material. I send sincere applause to the team.”

“Successfully demonstrating such complex phenomena is, on its own, further proof of the programmability and flexibility of D-Wave’s quantum computer,” said D-Wave CEO Alan Baratz. “But perhaps even more important is the fact that this was not demonstrated on a synthetic or ‘trick’ problem. This was achieved on a real problem in physics against an industry-standard tool for simulation–a demonstration of the practical value of the D-Wave processor. We must always be doing two things: furthering the science and increasing the performance of our systems and technologies to help customers develop applications with real-world business value. This kind of scientific breakthrough from our team is in line with that mission and speaks to the emerging value that it’s possible to derive from quantum computing today.”

The scientific achievements presented in Nature Communications further underpin D-Wave’s ongoing work with world-class customers to develop over 250 early quantum computing applications, with a number piloting in production applications, in diverse industries such as manufacturing, logistics, pharmaceutical, life sciences, retail and financial services. In September 2020, D-Wave brought its next-generation Advantage™ quantum system to market via the Leap™ quantum cloud service. The system includes more than 5,000 qubits and 15-way qubit connectivity, as well as an expanded hybrid solver service capable of running business problems with up to one million variables. The combination of Advantage’s computing power and scale with the hybrid solver service gives businesses the ability to run performant, real-world quantum applications for the first time.

That last paragraph seems more sales pitch than research oriented. It’s not unexpected in a company’s press release but I was surprised that the editors at EurekAlert didn’t remove it.

Here’s a link to and a citation for the latest paper,

Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets by Andrew D. King, Jack Raymond, Trevor Lanting, Sergei V. Isakov, Masoud Mohseni, Gabriel Poulin-Lamarre, Sara Ejtemaee, William Bernoudy, Isil Ozfidan, Anatoly Yu. Smirnov, Mauricio Reis, Fabio Altomare, Michael Babcock, Catia Baron, Andrew J. Berkley, Kelly Boothby, Paul I. Bunyk, Holly Christiani, Colin Enderud, Bram Evert, Richard Harris, Emile Hoskinson, Shuiyuan Huang, Kais Jooya, Ali Khodabandelou, Nicolas Ladizinsky, Ryan Li, P. Aaron Lott, Allison J. R. MacDonald, Danica Marsden, Gaelen Marsden, Teresa Medina, Reza Molavi, Richard Neufeld, Mana Norouzpour, Travis Oh, Igor Pavlov, Ilya Perminov, Thomas Prescott, Chris Rich, Yuki Sato, Benjamin Sheldan, George Sterling, Loren J. Swenson, Nicholas Tsai, Mark H. Volkmann, Jed D. Whittaker, Warren Wilkinson, Jason Yao, Hartmut Neven, Jeremy P. Hilton, Eric Ladizinsky, Mark W. Johnson, Mohammad H. Amin. Nature Communications volume 12, Article number: 1113 (2021) DOI: https://doi.org/10.1038/s41467-021-20901-5 Published: 18 February 2021

This paper is open access.

Quantum annealing and more

Dr. Andrew King, one of the D-Wave researchers, has written a February 18, 2021 article on Medium explaining some of the work. I’ve excerpted one of King’s points,

Insight #1: We observed what actually goes on under the hood in the processor for the first time

Quantum annealing — the approach adopted by D-Wave from the beginning — involves setting up a simple but purely quantum initial state, and gradually reducing the “quantumness” until the system is purely classical. This takes on the order of a microsecond. If you do it right, the classical system represents a hard (NP-complete) computational problem, and the state has evolved to an optimal, or at least near-optimal, solution to that problem.

What happens at the beginning and end of the computation are about as simple as quantum computing gets. But the action in the middle is hard to get a handle on, both theoretically and experimentally. That’s one reason these experiments are so important: they provide high-fidelity measurements of the physical processes at the core of quantum annealing. Our 2018 Nature article introduced the same simulation, but without measuring computation time. To benchmark the experiment this time around, we needed lower-noise hardware (in this case, we used the D-Wave 2000Q lower noise quantum computer), and we needed, strangely, to slow the simulation down. Since the quantum simulation happens so fast, we actually had to make things harder. And we had to find a way to slow down both quantum and classical simulation in an equitable way. The solution? Topological obstruction.

If you have time and the inclination, I encourage you to read King’s piece.

Quantum supremacy

This supremacy refers to an engineering milestone, and an October 23, 2019 news item on ScienceDaily announces the milestone has been reached,

Researchers in UC [University of California] Santa Barbara/Google scientist John Martinis’ group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits (“qubits”), their Sycamore computer has taken on — and solved — a problem considered intractable for classical computers.

An October 23, 2019 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, delves further into the work,

“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” said Brooks Foxen, a graduate student researcher in the Martinis Group. “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”

The feat is outlined in a paper in the journal Nature.

The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems including architectures of 72 and, with Sycamore, 54 qubits (one didn’t perform) that take advantage of both the awe-inspiring and bizarre properties of quantum mechanics.

“The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device,” said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer’s ability to hold and rapidly manipulate a vast amount of complex, unstructured data.

“We basically wanted to produce an entangled state involving all of our qubits as quickly as we can,” Foxen said, “and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns a bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition. The exercise, which was to verify that the circuit’s output corresponds to the sequence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities — before the system could lose its quantum coherence.”

‘A complex superposition state’

“We performed a fixed set of operations that entangles 53 qubits into a complex superposition state,” Chiaro explained. “This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds.”

“For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits — the exponential scaling is why people are interested in quantum computing to begin with,” Foxen said. “This is done by matrix multiplication, which is expensive for classical computers as the matrices become large.”
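Foxen’s 2^53 point is easy to make concrete: merely storing the state vector is already out of reach for classical machines, before any matrix multiplication even happens. A back-of-the-envelope calculation:

```python
# Storing a 53-qubit state vector: 2**53 complex amplitudes at 16 bytes
# each (double-precision complex) -- before a single gate is applied.
n_qubits = 53
amplitudes = 2 ** n_qubits
bytes_needed = amplitudes * 16
pebibytes = bytes_needed / 2 ** 50
print(f"{amplitudes:.2e} amplitudes -> {pebibytes:.0f} PiB of memory")
```

That works out to 128 PiB of RAM, far beyond any supercomputer’s memory, which is why classical simulations of such circuits fall back on slower, cleverer methods.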

According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit’s output (a “bitstring”) to its “corresponding ideal probability computed via simulation on a classical computer” to ascertain that the quantum computer was working correctly.
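The linear cross-entropy benchmark mentioned here has a simple form: F = 2^n · ⟨P(x_i)⟩ − 1, averaged over the sampled bitstrings x_i, where P is the ideal probability computed classically. The toy sketch below (8 qubits, a random stand-in for the ideal distribution, arbitrary sample counts — all assumptions for illustration) shows why a faithful sampler scores near 1 while pure noise scores near 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                      # toy size; the real experiment used 53 qubits
dim = 2 ** n

# Stand-in for a random circuit's ideal output distribution, built from a
# random complex state (roughly Porter-Thomas, as for random circuits).
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(psi) ** 2
p_ideal /= p_ideal.sum()

def linear_xeb(samples, p_ideal, n):
    # F = 2^n * <p_ideal(sampled bitstring)> - 1
    return 2 ** n * p_ideal[samples].mean() - 1

faithful = rng.choice(dim, size=200_000, p=p_ideal)  # sampler matches ideal
noise = rng.integers(dim, size=200_000)              # fully random output

print(round(linear_xeb(faithful, p_ideal, n), 2))    # near 1
print(round(linear_xeb(noise, p_ideal, n), 2))       # near 0
```

A faithful sampler preferentially hits the high-probability bitstrings, pushing the average above 1/2^n; a depolarized device samples uniformly and scores roughly zero.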

“We made a lot of design choices in the development of our processor that are really advantageous,” said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.

While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can’t be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and “cheat” the system.

“Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable,” commented Joe Incandela, UC Santa Barbara’s vice chancellor for research. “The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition.”

Looking ahead

With an achievement like “quantum supremacy,” it’s tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.

“It’s kind of a continuous improvement mindset,” Foxen said. “There are always projects in the works.” In the near term, further improvements to these “noisy” qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.

In the long term, however, the scientists are always looking to improve coherence times, or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running parallel to the design and build of the quantum computer itself, and ensure the researchers have a lot of work before hitting their next milestone.

“It’s been an honor and a pleasure to be associated with this team,” Chiaro said. “It’s a great collection of strong technical contributors with great leadership and the whole team really synergizes well.”

Here’s a link to and a citation for the paper,

Quantum supremacy using a programmable superconducting processor by Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis. Nature volume 574, pages 505–510 (2019) DOI: https://doi.org/10.1038/s41586-019-1666-5 Issue Date 24 October 2019

This paper appears to be open access.

Seeing the future with quantum computing

Researchers at the University of Sydney (Australia) have demonstrated the ability to see the ‘quantum future’ according to a Jan. 16, 2017 news item on ScienceDaily,

Scientists at the University of Sydney have demonstrated the ability to “see” the future of quantum systems, and used that knowledge to preempt their demise, in a major achievement that could help bring the strange and powerful world of quantum technology closer to reality.

The applications of quantum-enabled technologies are compelling and already demonstrating significant impacts — especially in the realm of sensing and metrology. And the potential to build exceptionally powerful quantum computers using quantum bits, or qubits, is driving investment from the world’s largest companies.

However a significant obstacle to building reliable quantum technologies has been the randomisation of quantum systems by their environments, or decoherence, which effectively destroys the useful quantum character.

The physicists have taken a technical quantum leap in addressing this, using techniques from big data to predict how quantum systems will change and then preventing the system’s breakdown from occurring.

A Jan. 14, 2017 University of Sydney press release (also on EurekAlert), which originated the news item, expands on the theme,

“Much the way the individual components in mobile phones will eventually fail, so too do quantum systems,” said the paper’s senior author Professor Michael J. Biercuk.

“But in quantum technology the lifetime is generally measured in fractions of a second, rather than years.”

Professor Biercuk, from the University of Sydney’s School of Physics and a chief investigator at the Australian Research Council’s Centre of Excellence for Engineered Quantum Systems, said his group had demonstrated it was possible to suppress decoherence in a preventive manner. The key was to develop a technique to predict how the system would disintegrate.

Professor Biercuk highlighted the challenges of making predictions in a quantum world: “Humans routinely employ predictive techniques in our daily experience; for instance, when we play tennis we predict where the ball will end up based on observations of the airborne ball,” he said.

“This works because the rules that govern how the ball will move, like gravity, are regular and known.  But what if the rules changed randomly while the ball was on its way to you?  In that case it’s next to impossible to predict the future behavior of that ball.

“And yet this situation is exactly what we had to deal with because the disintegration of quantum systems is random. Moreover, in the quantum realm observation erases quantumness, so our team needed to be able to guess how and when the system would randomly break.

“We effectively needed to swing at the randomly moving tennis ball while blindfolded.”

The team turned to machine learning for help in keeping their quantum systems – qubits realised in trapped atoms – from breaking.

What might look like random behavior actually contained enough information for a computer program to guess how the system would change in the future. It could then predict the future without direct observation, which would otherwise erase the system’s useful characteristics.

The predictions were remarkably accurate, allowing the team to use their guesses preemptively to compensate for the anticipated changes.

Doing this in real time allowed the team to prevent the disintegration of the quantum character, extending the useful lifetime of the qubits.
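As a cartoon of this predict-then-compensate idea (a deliberately simple stand-in; the paper’s actual machine-learning filter and noise model are more sophisticated), one can fit past noisy observations of a drifting qubit parameter and subtract the predicted drift ahead of time:

```python
import numpy as np

rng = np.random.default_rng(1)

# A qubit parameter (e.g. a frequency offset) drifts slowly in time;
# we only ever see noisy measurements of it.
t = np.arange(200, dtype=float)
drift = 0.02 * t + 0.5 * np.sin(t / 15)
observed = drift + rng.normal(scale=0.1, size=t.size)

# Fit a simple linear predictor on the first 150 points, then use it to
# pre-compensate over the next 50 points -- before observing them.
coeffs = np.polyfit(t[:150], observed[:150], deg=1)
predicted = np.polyval(coeffs, t[150:])

uncorrected = np.abs(drift[150:]).mean()             # do nothing
corrected = np.abs(drift[150:] - predicted).mean()   # subtract prediction
print(round(uncorrected, 2), round(corrected, 2))
```

Even this crude predictor cuts the residual error substantially; the Sydney team’s point is that real qubit noise, though random-looking, carries enough structure for far better predictors to extend qubit lifetimes.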

“We know that building real quantum technologies will require major advances in our ability to control and stabilise qubits – to make them useful in applications,” Professor Biercuk said.

“Our techniques apply to any qubit, built in any technology, including the special superconducting circuits being used by major corporations.”

“We’re excited to be developing new capabilities that turn quantum systems from novelties into useful technologies. The quantum future is looking better all the time,” Professor Biercuk said.

Here’s a link to and a  citation for the paper,

Prediction and real-time compensation of qubit decoherence via machine learning by Sandeep Mavadia, Virginia Frey, Jarrah Sastrawan, Stephen Dona, & Michael J. Biercuk. Nature Communications 8, Article number: 14106 (2017) doi:10.1038/ncomms14106 Published online: 16 January 2017

This paper is open access.

Connecting chaos and entanglement

Researchers seem to have stumbled across a link between classical and quantum physics. A July 12, 2016 University of California at Santa Barbara (UCSB) news release (also on EurekAlert) by Sonia Fernandez provides a description of both classical and quantum physics, as well as, the research that connects the two,

Using a small quantum system consisting of three superconducting qubits, researchers at UC Santa Barbara and Google have uncovered a link between aspects of classical and quantum physics thought to be unrelated: classical chaos and quantum entanglement. Their findings suggest that it would be possible to use controllable quantum systems to investigate certain fundamental aspects of nature.

“It’s kind of surprising because chaos is this totally classical concept — there’s no idea of chaos in a quantum system,” said Charles Neill, a researcher in the UCSB Department of Physics and lead author of a paper that appears in Nature Physics. “Similarly, there’s no concept of entanglement within classical systems. And yet it turns out that chaos and entanglement are really very strongly and clearly related.”

Initiated in the 15th century, classical physics generally examines and describes systems larger than atoms and molecules. It consists of hundreds of years’ worth of study including Newton’s laws of motion, electrodynamics, relativity, thermodynamics as well as chaos theory — the field that studies the behavior of highly sensitive and unpredictable systems. One classic example of chaos theory is the weather, in which a relatively small change in one part of the system is enough to foil predictions — and vacation plans — anywhere on the globe.

At smaller size and length scales in nature, however, such as those involving atoms and photons and their behaviors, classical physics falls short. In the early 20th century quantum physics emerged, with its seemingly counterintuitive and sometimes controversial science, including the notions of superposition (the theory that a particle can be located in several places at once) and entanglement (particles that are deeply linked behave as such despite physical distance from one another).

And so began the continuing search for connections between the two fields.

All systems are fundamentally quantum systems, according [to] Neill, but the means of describing in a quantum sense the chaotic behavior of, say, air molecules in an evacuated room, remains limited.

Imagine taking a balloon full of air molecules, somehow tagging them so you could see them and then releasing them into a room with no air molecules, noted co-author and UCSB/Google researcher Pedram Roushan. One possible outcome is that the air molecules remain clumped together in a little cloud following the same trajectory around the room. And yet, he continued, as we can probably intuit, the molecules will more likely take off in a variety of velocities and directions, bouncing off walls and interacting with each other, resting after the room is sufficiently saturated with them.

“The underlying physics is chaos, essentially,” he said. The molecules coming to rest — at least on the macroscopic level — is the result of thermalization, or of reaching equilibrium after they have achieved uniform saturation within the system. But in the infinitesimal world of quantum physics, there is still little to describe that behavior. The mathematics of quantum mechanics, Roushan said, do not allow for the chaos described by Newtonian laws of motion.

To investigate, the researchers devised an experiment using three quantum bits, the basic computational units of the quantum computer. Unlike classical computer bits, which utilize a binary system of two possible states (e.g., zero/one), a qubit can also use a superposition of both states (zero and one) as a single state. Additionally, multiple qubits can entangle, or link so closely that their measurements will automatically correlate. By manipulating these qubits with electronic pulses, Neill caused them to interact, rotate and evolve in the quantum analog of a highly sensitive classical system.

The result is a map of entanglement entropy of a qubit that, over time, comes to strongly resemble that of classical dynamics — the regions of entanglement in the quantum map resemble the regions of chaos on the classical map. The islands of low entanglement in the quantum map are located in the places of low chaos on the classical map.
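The entanglement entropy behind that map has a compact definition for a two-qubit pure state: it is the von Neumann entropy of either qubit’s reduced state, which can be computed from the state’s Schmidt coefficients. As a minimal numpy sketch (my own illustration, not the researchers’ code), a product state has zero entanglement entropy while a maximally entangled Bell state carries one full bit:

```python
import numpy as np

def entanglement_entropy(state):
    """Von Neumann entropy (in bits) of one qubit of a two-qubit pure state."""
    # Reshape the length-4 amplitude vector into a 2x2 matrix, one index per qubit.
    psi = np.asarray(state, dtype=complex).reshape(2, 2)
    # The singular values are the Schmidt coefficients of the bipartition.
    schmidt = np.linalg.svd(psi, compute_uv=False)
    probs = schmidt**2
    probs = probs[probs > 1e-12]  # drop numerical zeros before taking logs
    return max(0.0, float(-np.sum(probs * np.log2(probs))))

# Product state |00>: the qubits are uncorrelated.
product = np.array([1, 0, 0, 0])
# Bell state (|00> + |11>)/sqrt(2): maximally entangled.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

print(round(entanglement_entropy(product), 6))  # → 0.0
print(round(entanglement_entropy(bell), 6))     # → 1.0
```

In the experiment this quantity was tracked for each qubit over time; the sketch only shows the two extreme cases that bound the map.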

“There’s a very clear connection between entanglement and chaos in these two pictures,” said Neill. “And, it turns out that thermalization is the thing that connects chaos and entanglement. It turns out that they are actually the driving forces behind thermalization.

“What we realize is that in almost any quantum system, including on quantum computers, if you just let it evolve and you start to study what happens as a function of time, it’s going to thermalize,” added Neill, referring to the quantum-level equilibration. “And this really ties together the intuition between classical thermalization and chaos and how it occurs in quantum systems that entangle.”

The study’s findings have fundamental implications for quantum computing. At the level of three qubits, the computation is relatively simple, said Roushan, but as researchers push to build increasingly sophisticated and powerful quantum computers that incorporate more qubits to study highly complex problems that are beyond the ability of classical computing — such as those in the realms of machine learning, artificial intelligence, fluid dynamics or chemistry — a quantum processor optimized for such calculations will be a very powerful tool.

“It means we can study things that are completely impossible to study right now, once we get to bigger systems,” said Neill.

Experimental link between quantum entanglement (left) and classical chaos (right) found using a small quantum computer. Photo Credit: Courtesy Image (Courtesy: UCSB)


Here’s a link to and a citation for the paper,

Ergodic dynamics and thermalization in an isolated quantum system by C. Neill, P. Roushan, M. Fang, Y. Chen, M. Kolodrubetz, Z. Chen, A. Megrant, R. Barends, B. Campbell, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, J. Mutus, P. J. J. O’Malley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. Polkovnikov, & J. M. Martinis. Nature Physics (2016). DOI: 10.1038/nphys3830. Published online 11 July 2016.

This paper is behind a paywall.

D-Wave passes 1000-qubit barrier

A local (Vancouver, Canada-based) quantum computing company, D-Wave, is making quite a splash lately due to a technical breakthrough. h/t to Speaking up for Canadian Science for the Business in Vancouver article, and to Nanotechnology Now for the Harris & Harris Group press release and Economist article.

A June 22, 2015 article by Tyler Orton for Business in Vancouver describes D-Wave’s latest technical breakthrough,

“This updated processor will allow significantly more complex computational problems to be solved than ever before,” Jeremy Hilton, D-Wave’s vice-president of processor development, wrote in a June 22 [2015] blog entry.

Regular computers use bits – ones and zeroes – to make calculations, while quantum computers rely on qubits.

Qubits possess a “superposition” that allows them to be one and zero at the same time, meaning they can calculate all possible values in a single operation.

But the algorithm for a full-scale quantum computer requires 8,000 qubits.

A June 23, 2015 Harris & Harris Group press release adds more information about the breakthrough,

Harris & Harris Group, Inc. (Nasdaq: TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that it has successfully fabricated 1,000 qubit processors that power its quantum computers.  D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near optimal solutions, in a virtual “energy landscape.”  Every additional qubit doubles the search space of the processor.  At 1,000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which is substantially larger than the 2^512 possibilities available to the company’s currently available 512 qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
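As a loose classical analogy to the annealing idea described above (this is classical simulated annealing, not D-Wave’s quantum process), the search for low points in an energy landscape can be sketched as follows: propose one-bit flips, always accept downhill moves, and accept uphill moves with a Boltzmann probability that shrinks as the temperature is lowered. The energy function here is a toy Ising-style chain I chose for illustration, lowest when all bits agree:

```python
import math
import random

def simulated_annealing(energy, n_bits, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    """Classical simulated-annealing sketch over bit strings."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    e = energy(state)
    for step in range(steps):
        # Geometric cooling schedule from t_start down toward t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n_bits)
        state[i] ^= 1  # propose flipping one bit
        e_new = energy(state)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new       # accept the move
        else:
            state[i] ^= 1   # reject: flip the bit back
    return state, e

# Toy energy landscape: each adjacent pair of bits contributes -1 if they
# match and +1 if they differ, so the minimum is -(n_bits - 1).
def chain_energy(bits):
    return sum(1 if bits[i] != bits[i + 1] else -1 for i in range(len(bits) - 1))

best, e = simulated_annealing(chain_energy, n_bits=16)
print(best, e)  # the global minimum for 16 bits is -15, reached when all bits match
```

The press release’s point about search spaces also falls out of this picture: each added bit doubles the number of candidate states the landscape contains, so exhaustive search becomes hopeless long before 1,000 bits.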

A June 22, 2015 D-Wave news release, which originated the technical details about the breakthrough found in the Harris & Harris press release, provides more information along with some marketing hype (hyperbole), Note: Links have been removed,

As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.

“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”

The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:

  • Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
  • Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
  • Increased Control Circuitry Precision: In the testing to date, the increased precision coupled with the noise reduction has demonstrated improved precision by up to 40%. To accomplish both while also improving manufacturing yield is a significant achievement.
  • Advanced Fabrication: The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal layer planar process with 0.25μm features, believed to be the most complex superconductor integrated circuits ever built.
  • New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources. In addition to performing discrete optimization like its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.

“Breaking the 1000 qubit barrier marks the culmination of years of research and development by our scientists, engineers and manufacturing team,” said D-Wave CEO Vern Brownell. “It is a critical step toward bringing the promise of quantum computing to bear on some of the most challenging technical, commercial, scientific, and national defense problems that organizations face.”

A June 20, 2015 article in The Economist notes the growing commercial interest in quantum computing and provides good introductory information about the field. The article includes an analysis of various research efforts in Canada (they mention D-Wave), the US, and the UK. These excerpts don’t do justice to the article but will hopefully whet your appetite or provide an overview for anyone with limited time,

A COMPUTER proceeds one step at a time. At any particular moment, each of its bits—the binary digits it adds and subtracts to arrive at its conclusions—has a single, definite value: zero or one. At that moment the machine is in just one state, a particular mixture of zeros and ones. It can therefore perform only one calculation next. This puts a limit on its power. To increase that power, you have to make it work faster.

But bits do not exist in the abstract. Each depends for its reality on the physical state of part of the computer’s processor or memory. And physical states, at the quantum level, are not as clear-cut as classical physics pretends. That leaves engineers a bit of wriggle room. By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits.

… The biggest question is what the qubits themselves should be made from.

A qubit needs a physical system with two opposite quantum states, such as the direction of spin of an electron orbiting an atomic nucleus. Several things which can do the job exist, and each has its fans. Some suggest nitrogen atoms trapped in the crystal lattices of diamonds. Calcium ions held in the grip of magnetic fields are another favourite. So are the photons of which light is composed (in this case the qubit would be stored in the plane of polarisation). And quasiparticles, which are vibrations in matter that behave like real subatomic particles, also have a following.

The leading candidate at the moment, though, is to use a superconductor in which the qubit is either the direction of a circulating current, or the presence or absence of an electric charge. Both Google and IBM are banking on this approach. It has the advantage that superconducting qubits can be arranged on semiconductor chips of the sort used in existing computers. That, the two firms think, should make them easier to commercialise.

Google is also collaborating with D-Wave of Vancouver, Canada, which sells what it calls quantum annealers. The field’s practitioners took much convincing that these devices really do exploit the quantum advantage, and in any case they are limited to a narrower set of problems—such as searching for images similar to a reference image. But such searches are just the type of application of interest to Google. In 2013, in collaboration with NASA and USRA, a research consortium, the firm bought a D-Wave machine in order to put it through its paces. Hartmut Neven, director of engineering at Google Research, is guarded about what his team has found, but he believes D-Wave’s approach is best suited to calculations involving fewer qubits, while Dr Martinis and his colleagues build devices with more.

It’s not clear to me if the writers at The Economist were aware of D-Wave’s latest breakthrough at the time of writing, but I think not. In any event, they (The Economist writers) have included a provocative tidbit about quantum encryption,

Documents released by Edward Snowden, a whistleblower, revealed that the Penetrating Hard Targets programme of America’s National Security Agency was actively researching “if, and how, a cryptologically useful quantum computer can be built”. In May IARPA [Intelligence Advanced Research Projects Activity], the American government’s intelligence-research arm, issued a call for partners in its Logical Qubits programme, to make robust, error-free qubits. In April, meanwhile, Tanja Lange and Daniel Bernstein of Eindhoven University of Technology, in the Netherlands, announced PQCRYPTO, a programme to advance and standardise “post-quantum cryptography”. They are concerned that encrypted communications captured now could be subjected to quantum cracking in the future. That means strong pre-emptive encryption is needed immediately.

I encourage you to read the Economist article.

Two final comments. (1) The latest piece, prior to this one, about D-Wave was in a Feb. 6, 2015 posting about then-new investment in the company. (2) A Canadian effort in the field of quantum cryptography was mentioned in a May 11, 2015 posting (scroll down about 50% of the way) featuring a profile of Raymond Laflamme of the University of Waterloo’s Institute for Quantum Computing, in the context of an announcement about the science media initiative Research2Reality.