Microsoft, D-Wave Systems, quantum computing, and quantum supremacy?

Before diving into some of the latest quantum computing doings, here’s why quantum computing is so highly prized and chased after, from the Quantum supremacy Wikipedia entry, Note: Links have been removed,

In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem.[1][2][3] The term was coined by John Preskill in 2011,[1][4] but the concept dates to Yuri Manin’s 1980[5] and Richard Feynman’s 1981[6] proposals of quantum computing.

Quantum supremacy and quantum advantage have been mentioned a few times here over the years. You can check my March 6, 2020 posting for when researchers from the University of California at Santa Barbara claimed quantum supremacy and my July 31, 2023 posting for when D-Wave Systems claimed a quantum advantage on optimization problems. I’d understood quantum supremacy and quantum advantage to be synonymous but according to the article in Betakit (keep scrolling down to the D-Wave subhead and then to the ‘A controversy of sorts’ subhead in this posting), that’s not so.

The latest news on the quantum front comes from Microsoft (February 2025) and D-Wave Systems (March 2025).

Microsoft claims a new state of matter for breakthroughs in quantum computing

Here’s the February 19, 2025 news announcement from Microsoft’s Chetan Nayak, Technical Fellow and Corporate Vice President of Quantum Hardware, Note: Links have been removed,

Quantum computers promise to transform science and society—but only after they achieve the scale that once seemed distant and elusive, and their reliability is ensured by quantum error correction. Today, we’re announcing rapid advancements on the path to useful quantum computing:

  • Majorana 1: the world’s first Quantum Processing Unit (QPU) powered by a Topological Core, designed to scale to a million qubits on a single chip.
  • A hardware-protected topological qubit: research published today in Nature, along with data shared at the Station Q meeting, demonstrate our ability to harness a new type of material and engineer a radically different type of qubit that is small, fast, and digitally controlled.
  • A device roadmap to reliable quantum computation: our path from single-qubit devices to arrays that enable quantum error correction.
  • Building the world’s first fault-tolerant prototype (FTP) based on topological qubits: Microsoft is on track to build an FTP of a scalable quantum computer—in years, not decades—as part of the final phase of the Defense Advanced Research Projects Agency (DARPA) Underexplored Systems for Utility-Scale Quantum Computing (US2QC) program.

Together, these milestones mark a pivotal moment in quantum computing as we advance from scientific exploration to technological innovation.

Harnessing a new type of material

All of today’s announcements build on our team’s recent breakthrough: the world’s first topoconductor. This revolutionary class of materials enables us to create topological superconductivity, a new state of matter that previously existed only in theory. The advance stems from Microsoft’s innovations in the design and fabrication of gate-defined devices that combine indium arsenide (a semiconductor) and aluminum (a superconductor). When cooled to near absolute zero and tuned with magnetic fields, these devices form topological superconducting nanowires with Majorana Zero Modes (MZMs) at the wires’ ends.

Chris Vallance’s February 19, 2025 article for the British Broadcasting Corporation (BBC) news website provides a description of Microsoft’s claims and makes note of the competitive quantum research environment,

Microsoft has unveiled a new chip called Majorana 1 that it says will enable the creation of quantum computers able to solve “meaningful, industrial-scale problems in years, not decades”.

It is the latest development in quantum computing – tech which uses principles of particle physics to create a new type of computer able to solve problems ordinary computers cannot.

Creating quantum computers powerful enough to solve important real-world problems is very challenging – and some experts believe them to be decades away.

Microsoft says this timetable can now be sped up because of the “transformative” progress it has made in developing the new chip involving a “topological conductor”, based on a new material it has produced.

The firm believes its topoconductor has the potential to be as revolutionary as the semiconductor was in the history of computing.

But experts have told the BBC more data is needed before the significance of the new research – and its effect on quantum computing – can be fully assessed.

Jensen Huang – boss of the leading chip firm, Nvidia – said in January he believed “very useful” quantum computing would come in 20 years.

Chetan Nayak, a technical fellow of quantum hardware at Microsoft, said he believed the developments would shake up conventional thinking about the future of quantum computers.

“Many people have said that quantum computing, that is to say useful quantum computers, are decades away,” he said. “I think that this brings us into years rather than decades.”

Travis Humble, director of the Quantum Science Center of Oak Ridge National Laboratory in the US, said he agreed Microsoft would now be able to deliver prototypes faster – but warned there remained work to do.

“The long term goals for solving industrial applications on quantum computers will require scaling up these prototypes even further,” he said.

While rivals produced a steady stream of announcements – notably Google’s “Willow” at the end of 2024 – Microsoft seemed to be taking longer.

Pursuing this approach was, in the company’s own words, a “high-risk, high-rewards” strategy, but one it now believes is going to pay off.

If you have the time, do read Vallance’s February 19, 2025 article.

The research paper

Purdue University’s (Indiana, US) February 25, 2025 news release on EurekAlert announces publication of the research, Note: Links have been removed,

Microsoft Quantum published an article in Nature on Feb. 19 [2025] detailing recent advances in the measurement of quantum devices that will be needed to realize a topological quantum computer. Among the authors are Microsoft scientists and engineers who conduct research at Microsoft Quantum Lab West Lafayette, located at Purdue University. In an announcement by Microsoft Quantum, the team describes the operation of a device that is a necessary building block for a topological quantum computer. The published results are an important milestone along the path to construction of quantum computers that are potentially more robust and powerful than existing technologies.

“Our hope for quantum computation is that it will aid chemists, materials scientists and engineers working on the design and manufacturing of new materials that are so important to our daily lives,” said Michael Manfra, scientific director of Microsoft Quantum Lab West Lafayette and the Bill and Dee O’Brien Distinguished Professor of Physics and Astronomy, professor of materials engineering, and professor of electrical and computer engineering at Purdue. “The promise of quantum computation is in accelerating scientific discovery and its translation into useful technology. For example, if quantum computers reduce the time and cost to produce new lifesaving therapeutic drugs, that is real societal impact.” 

The Microsoft Quantum Lab West Lafayette team advanced the complex layered materials that make up the quantum plane of the full device architecture used in the tests. Microsoft scientists working with Manfra are experts in advanced semiconductor growth techniques, including molecular beam epitaxy, that are used to build low-dimensional electron systems that form the basis for quantum bits, or qubits. They built the semiconductor and superconductor layers with atomic layer precision, tailoring the material’s properties to those needed for the device architecture.

Manfra, a member of the Purdue Quantum Science and Engineering Institute, credited the strong relationship between Purdue and Microsoft, built over the course of a decade, with the advances conducted at Microsoft Quantum Lab West Lafayette. In 2017 Purdue deepened its relationship with Microsoft with a multiyear agreement that includes embedding Microsoft employees with Manfra’s research team at Purdue.

“This was a collaborative effort by a very sophisticated team, with a vital contribution from the Microsoft scientists at Purdue,” Manfra said. “It’s a Microsoft team achievement, but it’s also the culmination of a long-standing partnership between Purdue and Microsoft. It wouldn’t have been possible without an environment at Purdue that was conducive to this mode of work — I attempted to blend industrial with academic research to the betterment of both communities. I think that’s a success story.”

Quantum science and engineering at Purdue is a pillar of the Purdue Computes initiative, which is focused on advancing research in computing, physical AI, semiconductors and quantum technologies.

“This research breakthrough in the measurement of the state of quasi particles is a milestone in the development of topological quantum computing, and creates a watershed moment in the semiconductor-superconductor hybrid structure,” Purdue President Mung Chiang said. “Marking also the latest success in the strategic initiative of Purdue Computes, the deep collaboration that Professor Manfra and his team have created with the Microsoft Quantum Lab West Lafayette on the Purdue campus exemplifies the most impactful industry research partnership at any American university today.”

Most approaches to quantum computers rely on local degrees of freedom to encode information. The spin of an electron is a classic example of a qubit. But an individual spin is prone to disturbance — by relatively common things like heat, vibrations or interactions with other quantum particles — which can corrupt quantum information stored in the qubit, necessitating a great deal of effort in detecting and correcting errors. Instead of spin, topological quantum computers store information in a more distributed manner; the qubit state is encoded in the state of many particles acting in concert. Consequently, it is harder to scramble the information as the state of all the particles must be changed to alter the qubit state.

In the Nature paper, the Microsoft team was able to accurately and quickly measure the state of quasi particles that form the basis of the qubit.

“The device is used to measure a basic property of a topological qubit quickly,” Manfra said. “The team is excited to build on these positive results.”

“The team in West Lafayette pushed existing epitaxial technology to a new state-of-the-art for semiconductor-superconductor hybrid structures to ensure a perfect interface between each of the building blocks of the Microsoft hybrid system,” said Sergei Gronin, a Microsoft Quantum Lab scientist.

“The materials quality that is required for quantum computing chips necessitates constant improvements, so that’s one of the biggest challenges,” Gronin said. “First, we had to adjust and improve semiconductor technology to meet a new level that nobody was able to achieve before. But equally important was how to create this hybrid system. To do that, we had to merge a semiconducting part and a superconducting part. And that means you need to perfect the semiconductor and the superconductor and perfect the interface between them.”

While work discussed in the Nature article was performed by Microsoft employees, the exposure to industrial-scale research and development is an outstanding opportunity for Purdue students in Manfra’s academic group as well. John Watson, Geoffrey Gardner and Saeed Fallahi, who are among the coauthors of the paper, earned their doctoral degrees under Manfra and now work for Microsoft Quantum at locations in Redmond, Washington, and Copenhagen, Denmark. Most of Manfra’s former students now work for quantum computing companies, including Microsoft. Tyler Lindemann, who works in the West Lafayette lab and helped to build the hybrid semiconductor-superconductor structures required for the device, is earning a doctoral degree from Purdue under Manfra’s supervision.

“Working in Professor Manfra’s lab in conjunction with my work for Microsoft Quantum has given me a head start in my professional development, and been fruitful for my academic work,” Lindemann said. “At the same time, many of the world-class scientists and engineers at Microsoft Quantum have some background in academia, and being able to draw from their knowledge and experience is an indispensable resource in my graduate studies. From both perspectives, it’s a great opportunity.”
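The distributed encoding described in the news release, where a qubit’s state is spread over many particles so that local noise cannot easily corrupt it, has a loose classical analogue that is easy to simulate: a repetition code. The sketch below is my own illustration and is not how Majorana-based encoding actually works; it only shows how spreading one bit of information across several carriers makes it far more robust to independent local errors than a single unprotected bit.

```python
import random

random.seed(0)

def encode(bit, n=5):
    """Spread one logical bit across n physical carriers (repetition code)."""
    return [bit] * n

def apply_local_noise(codeword, p=0.1):
    """Flip each physical carrier independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote: the logical bit survives unless most carriers are hit."""
    return int(sum(codeword) > len(codeword) / 2)

trials = 10_000
# An unprotected bit fails whenever its single carrier is flipped ...
bare_errors = sum(apply_local_noise([1])[0] != 1 for _ in range(trials))
# ... but the distributed encoding fails only if 3 of its 5 carriers are flipped.
coded_errors = sum(decode(apply_local_noise(encode(1))) != 1 for _ in range(trials))
print(f"bare-bit error rate: {bare_errors / trials:.3f}")   # close to p = 0.10
print(f"encoded error rate:  {coded_errors / trials:.3f}")  # roughly 0.01
```

The point of the analogy is the quoted “arrow” picture: noise must hit a majority of the carriers simultaneously to destroy the logical bit, which is much less likely than hitting any one of them.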

Here’s a link to and a citation for the paper,

Interferometric single-shot parity measurement in InAs–Al hybrid devices by Microsoft Azure Quantum, Morteza Aghaee, Alejandro Alcaraz Ramirez, Zulfi Alam, Rizwan Ali, Mariusz Andrzejczuk, Andrey Antipov, Mikhail Astafev, Amin Barzegar, Bela Bauer, Jonathan Becker, Umesh Kumar Bhaskar, Alex Bocharov, Srini Boddapati, David Bohn, Jouri Bommer, Leo Bourdet, Arnaud Bousquet, Samuel Boutin, Lucas Casparis, Benjamin J. Chapman, Sohail Chatoor, Anna Wulff Christensen, Cassandra Chua, Patrick Codd, William Cole, Paul Cooper, Fabiano Corsetti, Ajuan Cui, Paolo Dalpasso, Juan Pablo Dehollain, Gijs de Lange, Michiel de Moor, Andreas Ekefjärd, Tareq El Dandachi, Juan Carlos Estrada Saldaña, Saeed Fallahi, Luca Galletti, Geoff Gardner, Deshan Govender, Flavio Griggio, Ruben Grigoryan, Sebastian Grijalva, Sergei Gronin, Jan Gukelberger, Marzie Hamdast, Firas Hamze, Esben Bork Hansen, Sebastian Heedt, Zahra Heidarnia, Jesús Herranz Zamorano, Samantha Ho, Laurens Holgaard, John Hornibrook, Jinnapat Indrapiromkul, Henrik Ingerslev, Lovro Ivancevic, Thomas Jensen, Jaspreet Jhoja, Jeffrey Jones, Konstantin V. Kalashnikov, Ray Kallaher, Rachpon Kalra, Farhad Karimi, Torsten Karzig, Evelyn King, Maren Elisabeth Kloster, Christina Knapp, Dariusz Kocon, Jonne V. Koski, Pasi Kostamo, Mahesh Kumar, Tom Laeven, Thorvald Larsen, Jason Lee, Kyunghoon Lee, Grant Leum, Kongyi Li, Tyler Lindemann, Matthew Looij, Julie Love, Marijn Lucas, Roman Lutchyn, Morten Hannibal Madsen, Nash Madulid, Albert Malmros, Michael Manfra, Devashish Mantri, Signe Brynold Markussen, Esteban Martinez, Marco Mattila, Robert McNeil, Antonio B. Mei, Ryan V. 
Mishmash, Gopakumar Mohandas, Christian Mollgaard, Trevor Morgan, George Moussa, Chetan Nayak, Jens Hedegaard Nielsen, Jens Munk Nielsen, William Hvidtfelt Padkar Nielsen, Bas Nijholt, Mike Nystrom, Eoin O’Farrell, Thomas Ohki, Keita Otani, Brian Paquelet Wütz, Sebastian Pauka, Karl Petersson, Luca Petit, Dima Pikulin, Guen Prawiroatmodjo, Frank Preiss, Eduardo Puchol Morejon, Mohana Rajpalke, Craig Ranta, Katrine Rasmussen, David Razmadze, Outi Reentila, David J. Reilly, Yuan Ren, Ken Reneris, Richard Rouse, Ivan Sadovskyy, Lauri Sainiemi, Irene Sanlorenzo, Emma Schmidgall, Cristina Sfiligoj, Mustafeez Bashir Shah, Kevin Simoes, Shilpi Singh, Sarat Sinha, Thomas Soerensen, Patrick Sohr, Tomas Stankevic, Lieuwe Stek, Eric Stuppard, Henri Suominen, Judith Suter, Sam Teicher, Nivetha Thiyagarajah, Raj Tholapi, Mason Thomas, Emily Toomey, Josh Tracy, Michelle Turley, Shivendra Upadhyay, Ivan Urban, Kevin Van Hoogdalem, David J. Van Woerkom, Dmitrii V. Viazmitinov, Dominik Vogel, John Watson, Alex Webster, Joseph Weston, Georg W. Winkler, Di Xu, Chung Kai Yang, Emrah Yucelen, Roland Zeisel, Guoji Zheng & Justin Zilke. Nature 638, 651–655 (2025). DOI: https://doi.org/10.1038/s41586-024-08445-2 Published online: 19 February 2025 Issue Date: 20 February 2025

This paper is open access. Note: I usually tag all of the authors but not this time.

Controversy over this and previous Microsoft quantum computing claims

Elizabeth Hlavinka’s March 17, 2025 article for Salon.com provides an overview, Note: Links have been removed,

The matter making up the world around us has long-since been organized into three neat categories: solids, liquids and gases. But last month [February 2025], Microsoft announced that it had allegedly discovered another state of matter originally theorized to exist in 1937. 

This new state of matter called the Majorana zero mode is made up of quasiparticles, which act as their own particle and antiparticle. The idea is that the Majorana zero mode could be used to build a quantum computer, which could help scientists answer complex questions that standard computers are not capable of solving, with implications for medicine, cybersecurity and artificial intelligence.

In late February [2025], Sen. Ted Cruz presented Microsoft’s new computer chip at a congressional hearing, saying, “Technologies like this new chip I hold in the palm of my hand, the Majorana 1 quantum chip, are unlocking a new era of computing that will transform industries from health care to energy, solving problems that today’s computers simply cannot.”

However, Microsoft’s announcement, claiming a “breakthrough in quantum computing,” was met with skepticism from some physicists in the field. Proving that this form of quantum computing can work requires first demonstrating the existence of Majorana quasiparticles, measuring what the Majorana particles are doing, and creating something called a topological qubit used to store quantum information.

But some say that not all of the data necessary to prove this has been included in the research paper published in Nature, on which this announcement is based. And due to a fraught history of similar claims from the company being disputed and ultimately rescinded, some are extra wary of the results. [emphasis mine]

It’s not the first time Microsoft has faced backlash from presenting findings in the field. In 2018, the company reported that they had detected the presence of Majorana zero-modes in a research paper, but it was retracted by Nature, the journal that published it after a report from independent experts put their findings under more intense scrutiny.

In the [2018] report, four physicists not involved in the research concluded that it did not appear that Microsoft had intentionally misrepresented the data, but instead seemed to be “caught up in the excitement of the moment [emphasis mine].”

Establishing the existence of these particles is extremely complex in part because disorder in the device can create signals that mimic these quasiparticles when they are not actually there. 

Modern computers in use today are encoded in bits, which can either be in a zero state (no current flowing through them), or a one state (current flowing.) These bits work together to send information and signals that communicate with the computer, powering everything from cell phones to video games.

Companies like Google, IBM and Amazon have invested in designing another form of quantum computer that uses chips built with “qubits,” or quantum bits. Qubits can exist in both zero and one states at the same time due to a phenomenon called superposition. 

However, qubits are subject to external noise from the environment that can affect their performance, said Dr. Paolo Molignini, a researcher in theoretical quantum physics at Stockholm University.

“Because qubits are in a superposition of zero and one, they are very prone to errors and they are very prone to what is called decoherence, which means there could be noise, thermal fluctuations or many things that can collapse the state of the qubits,” Molignini told Salon in a video call. “Then you basically lose all of the information that you were encoding.”

In December [2024], Google said its quantum computer could perform a calculation that a standard computer could complete in 10 septillion years — a period far longer than the age of the universe — in just under five minutes.

However, a general-purpose computer would require billions of qubits, so these approaches are still a far cry from having practical applications, said Dr. Patrick Lee, a physicist at the Massachusetts Institute of Technology [MIT], who co-authored the report leading to the 2018 Nature paper’s retraction.

Microsoft is taking a different approach to quantum computing by trying to develop  a topological qubit, which has the ability to store information in multiple places at once. Topological qubits exist within the Majorana zero states and are appealing because they can theoretically offer greater protection against environmental noise that destroys information within a quantum system.

Think of it like an arrow, where the arrowhead holds a portion of the information and the arrow tail holds the rest, Lee said. Distributing information across space like this is called topological protection.

“If you are able to put them far apart from each other, then you have a chance of maintaining the identity of the arrow even if it is subject to noise,” Lee told Salon in a phone interview. “The idea is that if the noise affects the head, it doesn’t kill the arrow and if it affects only the tail it doesn’t kill your arrow. It has to affect both sides simultaneously to kill your arrow, and that is very unlikely if you are able to put them apart.”

… Lee believes that even if the data doesn’t entirely prove that topological qubits exist in the Majorana zero-state, it still represents a scientific advancement. But he noted that several important issues need to be solved before it has practical implications. For one, the coherence time of these particles — or how long they can exist without being affected by environmental noise — is still very short, he explained.

“They make a measurement, come back, and the qubit has changed, so you have lost your coherence,” Lee said. “With this very short time, you cannot do anything with it.”

“I just wish they [Microsoft] were a bit more careful with their claims because I fear that if they don’t measure up to what they are saying, there might be a backlash at some point where people say, ‘You promised us all these fancy things and where are they now?’” Molignini said. “That might damage the entire quantum community, not just themselves.”

If you have the time, please read Hlavinka’s March 17, 2025 article in its entirety.
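Molignini’s description of superposition and decoherence can be made concrete with a few lines of linear algebra. The following is a minimal textbook-style sketch, not a model of any company’s hardware: a qubit in an equal superposition is written as a 2×2 density matrix, and repeated application of a phase-damping channel shrinks the off-diagonal “coherence” terms that quantum algorithms rely on, while leaving the 0/1 populations untouched.

```python
import numpy as np

# Equal superposition |+> = (|0> + |1>)/sqrt(2), written as a density matrix.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = np.outer(plus, plus)

def dephase(rho, gamma):
    """Phase-damping channel: the off-diagonal 'coherence' terms shrink
    by a factor (1 - gamma); the 0/1 populations are untouched."""
    out = rho.copy()
    out[0, 1] *= 1 - gamma
    out[1, 0] *= 1 - gamma
    return out

# Coherence halves on every step: 0.500, 0.250, 0.125, ...
for step in range(5):
    print(f"step {step}: coherence = {abs(state[0, 1]):.3f}")
    state = dephase(state, gamma=0.5)
```

Once the off-diagonal terms reach zero, the density matrix is indistinguishable from a classical coin flip between 0 and 1, which is exactly the “you basically lose all of the information” outcome Molignini describes.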

D-Wave Quantum Systems claims quantum supremacy on a real-world problem

A March 15, 2025 article by Bob Yirka for phys.org announces the news from D-Wave Quantum Systems. Note: The company, which had its headquarters in Canada (Burnaby, BC), now seems to be largely a US company with its main headquarters in Palo Alto, California, and an ancillary or junior (?) headquarters in Canada. Note: A link has been removed,

A team of quantum computer researchers at quantum computer maker D-Wave, working with an international team of physicists and engineers, is claiming that its latest quantum processor has been used to run a quantum simulation faster than could be done with a classical computer.

In their paper published in the journal Science, the group describes how they ran a quantum version of a mathematical approximation regarding how matter behaves when it changes states, such as from a gas to a liquid—in a way that they claim would be nearly impossible to conduct on a traditional computer.

Here’s a March 12, 2025 D-Wave Systems (now D-Wave Quantum Systems) news release touting its real-world problem-solving quantum supremacy,

New landmark peer-reviewed paper published in Science, “Beyond-Classical Computation in Quantum Simulation,” unequivocally validates D-Wave’s achievement of the world’s first and only demonstration of quantum computational supremacy on a useful, real-world problem

Research shows D-Wave annealing quantum computer performs magnetic materials simulation in minutes that would take nearly one million years and more than the world’s annual electricity consumption to solve using a classical supercomputer built with GPU clusters

D-Wave Advantage2 annealing quantum computer prototype used in supremacy achievement, a testament to the system’s remarkable performance capabilities

PALO ALTO, Calif. – March 12, 2025 – D-Wave Quantum Inc. (NYSE: QBTS) (“D-Wave” or the “Company”), a leader in quantum computing systems, software, and services and the world’s first commercial supplier of quantum computers, today announced a scientific breakthrough published in the esteemed journal Science, confirming that its annealing quantum computer outperformed one of the world’s most powerful classical supercomputers in solving complex magnetic materials simulation problems with relevance to materials discovery. The new landmark peer-reviewed paper, “Beyond-Classical Computation in Quantum Simulation,” validates this achievement as the world’s first and only demonstration of quantum computational supremacy on a useful problem.

An international collaboration of scientists led by D-Wave performed simulations of quantum dynamics in programmable spin glasses—computationally hard magnetic materials simulation problems with known applications to business and science—on both D-Wave’s Advantage2™ prototype annealing quantum computer and the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory. The work simulated the behavior of a suite of lattice structures and sizes across a variety of evolution times and delivered a multiplicity of important material properties. D-Wave’s quantum computer performed the most complex simulation in minutes and with a level of accuracy that would take nearly one million years using the supercomputer. In addition, it would require more than the world’s annual electricity consumption to solve this problem using the supercomputer, which is built with graphics processing unit (GPU) clusters.

“This is a remarkable day for quantum computing. Our demonstration of quantum computational supremacy on a useful problem is an industry first. All other claims of quantum systems outperforming classical computers have been disputed or involved random number generation of no practical value,” said Dr. Alan Baratz, CEO of D-Wave. “Our achievement shows, without question, that D-Wave’s annealing quantum computers are now capable of solving useful problems beyond the reach of the world’s most powerful supercomputers. We are thrilled that D-Wave customers can use this technology today to realize tangible value from annealing quantum computers.”

Realizing an Industry-First Quantum Computing Milestone
The behavior of materials is governed by the laws of quantum physics. Understanding the quantum nature of magnetic materials is crucial to finding new ways to use them for technological advancement, making materials simulation and discovery a vital area of research for D-Wave and the broader scientific community. Magnetic materials simulations, like those conducted in this work, use computer models to study how tiny particles not visible to the human eye react to external factors. Magnetic materials are widely used in medical imaging, electronics, superconductors, electrical networks, sensors, and motors.

“This research proves that D-Wave’s quantum computers can reliably solve quantum dynamics problems that could lead to discovery of new materials,” said Dr. Andrew King, senior distinguished scientist at D-Wave. “Through D-Wave’s technology, we can create and manipulate programmable quantum matter in ways that were impossible even a few years ago.”

Materials discovery is a computationally complex, energy-intensive and expensive task. Today’s supercomputers and high-performance computing (HPC) centers, which are built with tens of thousands of GPUs, do not always have the computational processing power to conduct complex materials simulations in a timely or energy-efficient manner. For decades, scientists have aspired to build a quantum computer capable of solving complex materials simulation problems beyond the reach of classical computers. D-Wave’s advancements in quantum hardware have made it possible for its annealing quantum computers to process these types of problems for the first time.

“This is a significant milestone made possible through over 25 years of research and hardware development at D-Wave, two years of collaboration across 11 institutions worldwide, and more than 100,000 GPU and CPU hours of simulation on one of the world’s fastest supercomputers as well as computing clusters in collaborating institutions,” said Dr. Mohammad Amin, chief scientist at D-Wave. “Besides realizing Richard Feynman’s vision of simulating nature on a quantum computer, this research could open new frontiers for scientific discovery and quantum application development.” 

Advantage2 System Demonstrates Powerful Performance Gains
The results shown in “Beyond-Classical Computation in Quantum Simulation” were enabled by D-Wave’s previous scientific milestones published in Nature Physics (2022) and Nature (2023), which theoretically and experimentally showed that quantum annealing provides a quantum speedup in complex optimization problems. These scientific advancements led to the development of the Advantage2 prototype’s fast anneal feature, which played an essential role in performing the precise quantum calculations needed to demonstrate quantum computational supremacy.

“The broader quantum computing research and development community is collectively building an understanding of the types of computations for which quantum computing can overtake classical computing. This effort requires ongoing and rigorous experimentation,” said Dr. Trevor Lanting, chief development officer at D-Wave. “This work is an important step toward sharpening that understanding, with clear evidence of where our quantum computer was able to outperform classical methods. We believe that the ability to recreate the entire suite of results we produced is not possible classically. We encourage our peers in academia to continue efforts to further define the line between quantum and classical capabilities, and we believe these efforts will help drive the development of ever more powerful quantum computing technology.”

The Advantage2 prototype used to achieve quantum computational supremacy is available for customers to use today via D-Wave’s Leap™ real-time quantum cloud service. The prototype provides substantial performance improvements from previous-generation Advantage systems, including increased qubit coherence, connectivity, and energy scale, which enables higher-quality solutions to larger, more complex problems. Moreover, D-Wave now has an Advantage2 processor that is four times larger than the prototype used in this work and has extended the simulations of this paper from hundreds of qubits to thousands of qubits, which are significantly larger than those described in this paper.

Leading Industry Voices Echo Support
Dr. Hidetoshi Nishimori, Professor, Department of Physics, Tokyo Institute of Technology:
“This paper marks a significant milestone in demonstrating the real-world applicability of large-scale quantum computing. Through rigorous benchmarking of quantum annealers against state-of-the-art classical methods, it convincingly establishes a quantum advantage in tackling practical problems, revealing the transformative potential of quantum computing at an unprecedented scale.”

Dr. Seth Lloyd, Professor of Quantum Mechanical Engineering, MIT:
“Although large-scale, fully error-corrected quantum computers are years in the future, quantum annealers can probe the features of quantum systems today. In an elegant paper, the D-Wave group has used a large-scale quantum annealer to uncover patterns of entanglement in a complex quantum system that lie far beyond the reach of the most powerful classical computer. The D-Wave result shows the promise of quantum annealers for exploring exotic quantum effects in a wide variety of systems.”

Dr. Travis Humble, Director of Quantum Science Center, Distinguished Scientist at Oak Ridge National Laboratory:
“ORNL seeks to expand the frontiers of computation through many different avenues, and benchmarking quantum computing for materials science applications provides critical input to our understanding of new computational capabilities.”

Dr. Juan Carrasquilla, Associate Professor at the Department of Physics, ETH Zürich:
“I believe these results mark a critical scientific milestone for D-Wave. They also serve as an invitation to the scientific community, as these results offer a strong benchmark and motivation for developing novel simulation techniques for out-of-equilibrium dynamics in quantum many-body physics. Furthermore, I hope these findings encourage theoretical exploration of the computational challenges involved in performing such simulations, both classically and quantum-mechanically.”

Dr. Victor Martin-Mayor, Professor of Theoretical Physics, Universidad Complutense de Madrid:
“This paper is not only a tour-de-force for experimental physics, it is also remarkable for the clarity of the results. The authors have addressed a problem that is regarded both as important and as very challenging to a classical computer. The team has shown that their quantum annealer performs better at this task than the state-of-the-art methods for classical simulation.”

Dr. Alberto Nocera, Senior Staff Scientist, The University of British Columbia:
“Our work shows the impracticability of state-of-the-art classical simulations to simulate the dynamics of quantum magnets, opening the door for quantum technologies based on analog simulators to solve scientific questions that may otherwise remain unanswered using conventional computers.”

About D-Wave Quantum Inc.
D-Wave is a leader in the development and delivery of quantum computing systems, software, and services. We are the world’s first commercial supplier of quantum computers, and the only company building both annealing and gate-model quantum computers. Our mission is to help customers realize the value of quantum, today. Our 5,000+ qubit Advantage™ quantum computers, the world’s largest, are available on-premises or via the cloud, supported by 99.9% availability and uptime. More than 100 organizations trust D-Wave with their toughest computational challenges. With over 200 million problems submitted to our Advantage systems and Advantage2™ prototypes to date, our customers apply our technology to address use cases spanning optimization, artificial intelligence, research and more. Learn more about realizing the value of quantum computing today and how we’re shaping the quantum-driven industrial and societal advancements of tomorrow: www.dwavequantum.com.

Forward-Looking Statements
Certain statements in this press release are forward-looking, as defined in the Private Securities Litigation Reform Act of 1995. These statements involve risks, uncertainties, and other factors that may cause actual results to differ materially from the information expressed or implied by these forward-looking statements and may not be indicative of future results. These forward-looking statements are subject to a number of risks and uncertainties, including, among others, various factors beyond management’s control, including the risks set forth under the heading “Risk Factors” discussed under the caption “Item 1A. Risk Factors” in Part I of our most recent Annual Report on Form 10-K or any updates discussed under the caption “Item 1A. Risk Factors” in Part II of our Quarterly Reports on Form 10-Q and in our other filings with the SEC. Undue reliance should not be placed on the forward-looking statements in this press release in making an investment decision, which are based on information available to us on the date hereof. We undertake no duty to update this information unless required by law.

Here’s a link to and a citation for the most recent paper,

Beyond-classical computation in quantum simulation by Andrew D. King , Alberto Nocera, Marek M. Rams, Jacek Dziarmaga, Roeland Wiersema, William Bernoudy, Jack Raymond, Nitin Kaushal, Niclas Heinsdorf, Richard Harris, Kelly Boothby, Fabio Altomare, Mohsen Asad, Andrew J. Berkley, Martin Boschnak, Kevin Chern, Holly Christiani, Samantha Cibere, Jake Connor, Martin H. Dehn, Rahul Deshpande, Sara Ejtemaee, Pau Farre, Kelsey Hamer, Emile Hoskinson, Shuiyuan Huang, Mark W. Johnson, Samuel Kortas, Eric Ladizinsky, Trevor Lanting, Tony Lai, Ryan Li, Allison J. R. MacDonald, Gaelen Marsden, Catherine C. McGeoch, Reza Molavi, Travis Oh, Richard Neufeld, Mana Norouzpour, Joel Pasvolsky, Patrick Poitras, Gabriel Poulin-Lamarre, Thomas Prescott, Mauricio Reis, Chris Rich, Mohammad Samani, Benjamin Sheldan, Anatoly Smirnov, Edward Sterpka, Berta Trullas Clavera, Nicholas Tsai, Mark Volkmann, Alexander M. Whiticar, Jed D. Whittaker, Warren Wilkinson, Jason Yao, T.J. Yi, Anders W. Sandvik, Gonzalo Alvarez, Roger G. Melko, Juan Carrasquilla, Marcel Franz, and Mohammad H. Amin. Science 12 Mar 2025 First Release DOI: 10.1126/science.ado6285

This paper appears to be open access. Note: I usually tag all of the authors, but not this time either.

A controversy of sorts

Madison McLauchlan’s March 19, 2025 article for Betakit (website for Canadian Startup News & Tech Innovation), Note: Links have been removed,

Canadian-born company D-Wave Quantum Systems said it achieved “quantum supremacy” last week after publishing what it calls a groundbreaking paper in the prestigious journal Science. Despite the lofty term, Canadian experts say supremacy is not the be-all, end-all of quantum innovation. 

D-Wave, which has labs in Palo Alto, Calif., and Burnaby, BC, claimed in a statement that it has shown “the world’s first and only demonstration of quantum computational supremacy on a useful, real-world problem.”

Coined in the early 2010s by physicist John Preskill, quantum supremacy is the ability of a quantum computing system to solve a problem no classical computer can in a feasible amount of time. The metric makes no mention of whether the problem needs to be useful or relevant to real life. Google researchers published a paper in Nature in 2019 claiming they cleared that bar with the Sycamore quantum processor. Researchers at the University of Science and Technology of China claimed they demonstrated quantum supremacy several times. 

D-Wave’s attempt differs in that its researchers aimed to solve a real-world materials-simulation problem with quantum computing—one the company claims would be nearly impossible for a traditional computer to solve in a reasonable amount of time. D-Wave used an annealing quantum computer designed to solve optimization problems. The problem is represented as an energy landscape, where the “lowest energy state” corresponds to the solution. 
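To picture what that energy-landscape framing means, here is a purely classical simulated-annealing sketch on a toy four-spin Ising ring (my own illustration with made-up couplings, not D-Wave's hardware, software, or the materials problem from the paper): the "landscape" is the Ising energy over spin configurations, and annealing tries to settle into the lowest-energy one.

```python
import math
import random

def ising_energy(spins, couplings):
    """Energy of a spin configuration: E = -sum of J_ij * s_i * s_j."""
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def simulated_anneal(couplings, n_spins, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n_spins)]
    energy = ising_energy(spins, couplings)
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n_spins)
        spins[i] = -spins[i]                 # propose flipping one spin
        new_energy = ising_energy(spins, couplings)
        delta = new_energy - energy
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            energy = new_energy              # accept the move
        else:
            spins[i] = -spins[i]             # reject: undo the flip
    return spins, energy

# Toy problem: a 4-spin ring with ferromagnetic couplings (J = 1).
# Its ground states are all-up or all-down, with energy -4.
couplings = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
spins, energy = simulated_anneal(couplings, n_spins=4)
print(spins, energy)
```

A quantum annealer tackles the same kind of minimization, but uses quantum effects such as tunneling rather than thermal hops to move through the landscape.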

While exciting, quantum supremacy is just one metric among several that mark the progress toward widely useful quantum computers, industry experts told BetaKit. 

“It is a very important and mostly academic metric, but certainly not the most important in the grand scheme of things, as it doesn’t take into account the usefulness of the algorithm,” said Martin Laforest, managing partner at Quantacet, a specialized venture capital fund for quantum startups. 

He added that Google and Xanadu’s [Xanadu Quantum Technologies based in Toronto, Canada] past claims to quantum supremacy were “extraordinary pieces of work, but didn’t unlock practicality.” 

Laforest, along with executives at Canadian quantum startups Nord Quantique and Photonic, say that the milestones of ‘quantum utility’ or ‘quantum advantage’ may be more important than supremacy. 

According to quantum computing company Quera [QuEra?], quantum advantage is the demonstration of a quantum algorithm solving a real-world problem on a quantum computer faster than any classical algorithm running on any classical computer. On the other hand, quantum utility, according to IBM, refers to when a quantum computer is able to perform reliable computations at a scale beyond brute-force classical computing methods that provide exact solutions to computational problems. 

Error correction hasn’t traditionally been considered a requirement for quantum supremacy, but Laforest told BetaKit the term is “an ever-moving target, constantly challenged by advances in classical algorithms.” He added: “In my opinion, some level of supremacy or utility may be possible in niche areas without error correction, but true disruption requires it.”

Paul Terry, CEO of Vancouver-based Photonic, thinks that though D-Wave’s claim to quantum supremacy shows “continued progress to real value,” scalability is the industry’s biggest hurdle to overcome.

But as with many milestone claims in the quantum space, D-Wave’s latest innovation has been met with scrutiny from industry competitors and researchers on the breakthrough’s significance, claiming that classical computers have achieved similar results. Laforest echoed this sentiment.

“Personally, I wouldn’t say it’s an unequivocal demonstration of supremacy, but it is a damn nice experiment that once again shows the murky zone between traditional computing and early quantum advantage,” Laforest said.

Originally founded out of the University of British Columbia, D-Wave went public on the New York Stock Exchange just over two years ago through a merger with a special-purpose acquisition company in 2022. D-Wave became a Delaware-domiciled corporation as part of the deal.

Earlier this year, D-Wave’s stock price dropped after Nvidia CEO Jensen Huang publicly stated that he estimated that useful quantum computers were more than 15 years away. D-Wave’s stock price, which had been struggling, has seen a considerable bump in recent months alongside a broader boost in the quantum market. The price popped after its most recent earnings, shared right after its quantum supremacy announcement. 

The beat goes on

Some of this is standard in science. There’s always debate over big claims, and it’s not unusual for people to get overexcited and have to make a retraction. Scientists are people too. That said, there’s a lot of money on the line, and that appears to be making the situation even more volatile than usual.

That last paragraph was completed on the morning of March 21, 2025 and later that afternoon I came across this March 21, 2025 article by Michael Grothaus for Fast Company, Note: Links have been removed,

Quantum computing stocks got pummeled yesterday, with the four most prominent public quantum computing companies—IonQ, Rigetti Computing, Quantum Computing Inc., and D-Wave Quantum Inc.—falling anywhere from over 9% to over 18%. The reason? A lot of it may have to do with AI chip giant Nvidia. Again.

Stocks crash yesterday on Nvidia quantum news

Yesterday was a bit of a bloodbath on the stock market for the four most prominent publicly traded quantum computing companies. …

All four of these quantum computing stocks [IonQ, Inc.; Rigetti Computing, Inc.; Quantum Computing Inc.; D-Wave Quantum Inc.] tumbled on the day that AI chip giant Nvidia kicked off its two-day Quantum Day event. In a blog post from January 14 announcing Quantum Day, Nvidia said the event “brings together leading experts for a comprehensive and balanced perspective on what businesses should expect from quantum computing in the coming decades — mapping the path toward useful quantum applications.”

Besides bringing quantum experts together, the AI behemoth also announced that it will be launching a new quantum computing research center in Boston.

Called the NVIDIA Accelerated Quantum Research Center (NVAQC), the new research lab “will help solve quantum computing’s most challenging problems, ranging from qubit noise to transforming experimental quantum processors into practical devices,” the company said in a press release.

The NVAQC’s location in Boston means it will be near both Harvard University and the Massachusetts Institute of Technology (MIT). 

Before Nvidia’s announcement yesterday, IonQ, Rigetti, D-Wave, and Quantum Computing Inc. were the leaders in the nascent field of quantum computing. And while they still are right now (Nvidia’s quantum research lab hasn’t been built yet), the fear is that Nvidia could use its deep pockets to quickly buy its way into a leadership spot in the field. With its $2.9 trillion market cap, the company can easily afford to throw billions of research dollars into quantum computing.

As noted by the Motley Fool, the location of the NVIDIA Accelerated Quantum Research Center in Boston will also allow Nvidia to more easily tap into top quantum talent from Harvard and MIT—talent that may have otherwise gone to IonQ, Rigetti, D-Wave, and Quantum Computing Inc.

Nvidia’s announcement is a massive about-face from the company in regard to how it views quantum computing. It’s also the second time that Nvidia has caused quantum stocks to crash this year. Back in January, shares in prominent quantum computing companies fell after Huang said that practical use of quantum computing was decades away.

Those comments were something quantum computing company CEOs like D-Wave’s Alan Baratz took issue with. “It’s an egregious error on Mr. Huang’s part,” Baratz told Fast Company at the time. “We’re not decades away from commercial quantum computers. They exist. There are companies that are using our quantum computer today.”

According to Investor’s Business Daily, Huang reportedly got the idea for Nvidia’s Quantum Day event after the blowback to his comments, inviting quantum computing executives to the event to explain why he was incorrect about quantum computing.

The word is volatile.

DeepSeek, a Chinese rival to OpenAI and other US AI companies

There’s been quite the kerfuffle over DeepSeek during the last few days. This January 27, 2025 article by Alexandra Mae Jones for the Canadian Broadcasting Corporation (CBC) news online was my introduction to DeepSeek AI, Note: A link has been removed,

There’s a new player in AI on the world stage: DeepSeek, a Chinese startup that’s throwing tech valuations into chaos and challenging U.S. dominance in the field with an open-source model that they say they developed for a fraction of the cost of competitors.

DeepSeek’s free AI assistant — which by Monday [January 27, 2025] had overtaken rival ChatGPT to become the top-rated free application on Apple’s App Store in the United States — offers the prospect of a viable, cheaper AI alternative, raising questions on the heavy spending by U.S. companies such as Apple and Microsoft, amid a growing investor push for returns.

U.S. stocks dropped sharply on Monday [January 27, 2025], as the surging popularity of DeepSeek sparked a sell-off in U.S. chipmakers.

“[DeepSeek] performs as well as the leading models in Silicon Valley and in some cases, according to their claims, even better,” Sheldon Fernandez, co-founder of DarwinAI, told CBC News. “But they did it with a fractional amount of the resources is really what is turning heads in our industry.”

What is DeepSeek?

Little is known about the small Hangzhou startup behind DeepSeek, which was founded out of a hedge fund in 2023, but largely develops open-source AI models. 

Its researchers wrote in a paper last month that the DeepSeek-V3 model, launched on Jan. 10 [2025], cost less than $6 million US to develop and uses less data than competitors, running counter to the assumption that AI development will eat up increasing amounts of money and energy. 

Some analysts are skeptical about DeepSeek’s $6 million claim, pointing out that this figure only covers computing power. But Fernandez said that even if you triple DeepSeek’s cost estimates, it would still cost significantly less than its competitors. 

The open source release of DeepSeek-R1, which came out on Jan. 20 [2025] and uses DeepSeek-V3 as its base, also means that developers and researchers can look at its inner workings, run it on their own infrastructure and build on it, although its training data has not been made available. 

“Instead of paying OpenAI $20 a month or $200 a month for the latest advanced versions of these models, [people] can really get these types of features for free. And so it really upends a lot of the business model that a lot of these companies were relying on to justify their very high valuations.”

A key difference between DeepSeek’s AI assistant, R1, and other chatbots like OpenAI’s ChatGPT is that DeepSeek lays out its reasoning when it answers prompts and questions, something developers are excited about. 

“The dealbreaker is the access to the raw thinking steps,” Elvis Saravia, an AI researcher and co-founder of the U.K.-based AI consulting firm DAIR.AI, wrote on X, adding that the response quality was “comparable” to OpenAI’s latest reasoning model, o1.
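Those “raw thinking steps” are straightforward to work with programmatically. As a small hedged sketch (the tag format follows DeepSeek-R1’s open releases, which wrap the reasoning trace in `<think>...</think>` tags; the sample text is invented for illustration), here is how a developer might separate the trace from the final answer:

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a model response into (reasoning, final_answer).

    Assumes the reasoning trace is wrapped in <think>...</think> tags,
    as in DeepSeek-R1's open releases; returns empty reasoning if absent.
    """
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if not match:
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

# Hypothetical response text, for illustration only.
sample = "<think>2 + 2: add the units digits.</think>The answer is 4."
reasoning, answer = split_reasoning(sample)
print(answer)  # The answer is 4.
```

Exposing the trace this way is exactly what lets researchers inspect, and build on, how the model reached its conclusion.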

U.S. dominance in AI challenged

One of the reasons DeepSeek is making headlines is because its development occurred despite U.S. actions to keep Americans at the top of AI development. In 2022, the U.S. curbed exports of computer chips to China, hampering their advanced supercomputing development.

The latest AI models from DeepSeek are widely seen to be competitive with those of OpenAI and Meta, which rely on high-end computer chips and extensive computing power.

Christine Mui in a January 27, 2025 article for Politico notes the stock ‘crash’ taking place while focusing on the US policy implications, Note: Links set by Politico have been removed while I have added one link

A little-known Chinese artificial intelligence startup shook the tech world this weekend by releasing an OpenAI-like assistant, which shot to the No.1 ranking on Apple’s app store and caused American tech giants’ stocks to tumble.

From Washington’s perspective, the news raised an immediate policy alarm: It happened despite consistent, bipartisan efforts to stifle AI progress in China.

In tech terms, what freaked everyone out about DeepSeek’s R1 model is that it replicated — and in some cases, surpassed — the performance of OpenAI’s cutting-edge o1 product across a host of performance benchmarks, at a tiny fraction of the cost.

The business takeaway was straightforward. DeepSeek’s success shows that American companies might not need to spend nearly as much as expected to develop AI models. That both intrigues and worries investors and tech leaders.

The policy implications, though, are more complex. Washington’s rampant anxiety about beating China has led to policies that the industry has very mixed feelings about.

On one hand, most tech firms hate the export controls that stop them from selling as much to the world’s second-largest economy, and force them to develop new products if they want to do business with China. If DeepSeek shows those rules are pointless, many would be delighted to see them go away.

On the other hand, anti-China, protectionist sentiment has encouraged Washington to embrace a whole host of industry wishlist items, from a lighter-touch approach to AI rules to streamlined permitting for related construction projects. Does DeepSeek mean those, too, are failing? Or does it trigger a doubling-down?

DeepSeek’s success truly seems to challenge the belief that the future of American AI demands ever more chips and power. That complicates Trump’s interest in rapidly building out that kind of infrastructure in the U.S.

Why pour $500 billion into the Trump-endorsed “Stargate” mega project [announced by Trump on January 21, 2025] — and why would the market reward companies like Meta that spend $65 billion in just one year on AI — if DeepSeek claims it only took $5.6 million and second-tier Nvidia chips to train one of its latest models? (U.S. industry insiders dispute the startup’s figures and claim they don’t tell the full story, but even at 100 times that cost, it would be a bargain.)

Tech companies, of course, love the recent bloom of federal support, and it’s unlikely they’ll drop their push for more federal investment to match anytime soon. Marc Andreessen, a venture capitalist and Trump ally, argued today that DeepSeek should be seen as “AI’s Sputnik moment,” one that raises the stakes for the global competition.

That would strengthen the case that some American AI companies have been pressing for the new administration to invest government resources into AI infrastructure (OpenAI), tighten restrictions on China (Anthropic) and ease up on regulations to ensure their developers build “artificial general intelligence” before their geopolitical rivals.

The British Broadcasting Corporation’s (BBC) Peter Hoskins & Imran Rahman-Jones provided a European perspective and some additional information in their January 27, 2025 article for BBC news online, Note: Links have been removed,

US tech giant Nvidia lost over a sixth of its value after the surging popularity of a Chinese artificial intelligence (AI) app spooked investors in the US and Europe.

DeepSeek, a Chinese AI chatbot reportedly made at a fraction of the cost of its rivals, launched last week but has already become the most downloaded free app in the US.

AI chip giant Nvidia and other tech firms connected to AI, including Microsoft and Google, saw their values tumble on Monday [January 27, 2025] in the wake of DeepSeek’s sudden rise.

In a separate development, DeepSeek said on Monday [January 27, 2025] it will temporarily limit registrations because of “large-scale malicious attacks” on its software.

The DeepSeek chatbot was reportedly developed for a fraction of the cost of its rivals, raising questions about the future of America’s AI dominance and the scale of investments US firms are planning.

DeepSeek is powered by the open source DeepSeek-V3 model, which its researchers claim was trained for around $6m – significantly less than the billions spent by rivals.

But this claim has been disputed by others in AI.

The researchers say they use already existing technology, as well as open source code – software that can be used, modified or distributed by anybody free of charge.

DeepSeek’s emergence comes as the US is restricting the sale of the advanced chip technology that powers AI to China.

To continue their work without steady supplies of imported advanced chips, Chinese AI developers have shared their work with each other and experimented with new approaches to the technology.

This has resulted in AI models that require far less computing power than before.

It also means that they cost a lot less than previously thought possible, which has the potential to upend the industry.

After DeepSeek-R1 was launched earlier this month, the company boasted of “performance on par with” one of OpenAI’s latest models when used for tasks such as maths, coding and natural language reasoning.

In Europe, Dutch chip equipment maker ASML ended Monday’s trading with its share price down by more than 7% while shares in Siemens Energy, which makes hardware related to AI, had plunged by a fifth.

“This idea of a low-cost Chinese version hasn’t necessarily been forefront, so it’s taken the market a little bit by surprise,” said Fiona Cincotta, senior market analyst at City Index.

“So, if you suddenly get this low-cost AI model, then that’s going to raise concerns over the profits of rivals, particularly given the amount that they’ve already invested in more expensive AI infrastructure.”

Singapore-based technology equity adviser Vey-Sern Ling told the BBC it could “potentially derail the investment case for the entire AI supply chain”.

Who founded DeepSeek?

The company was founded in 2023 by Liang Wenfeng in Hangzhou, a city in southeastern China.

The 40-year-old, an information and electronic engineering graduate, also founded the hedge fund that backed DeepSeek.

He reportedly built up a store of Nvidia A100 chips, now banned from export to China.

Experts believe this collection – which some estimates put at 50,000 – led him to launch DeepSeek, by pairing these chips with cheaper, lower-end ones that are still available to import.

Mr Liang was recently seen at a meeting between industry experts and the Chinese premier Li Qiang.

In a July 2024 interview with The China Academy, Mr Liang said he was surprised by the reaction to the previous version of his AI model.

“We didn’t expect pricing to be such a sensitive issue,” he said.

“We were simply following our own pace, calculating costs, and setting prices accordingly.”

A January 28, 2025 article by Daria Solovieva for salon.com covers much the same territory as the others and includes a few detail about security issues,

The pace at which U.S. consumers have embraced DeepSeek is raising national security concerns similar to those surrounding TikTok, the social media platform that faces a ban unless it is sold to a non-Chinese company.

The U.S. Supreme Court this month upheld a federal law that requires TikTok’s sale. The Court sided with the U.S. government’s argument that the app can collect and track data on its 170 million American users. President Donald Trump has paused enforcement of the ban until April to try to negotiate a deal.

But “the threat posed by DeepSeek is more direct and acute than TikTok,” Luke de Pulford, co-founder and executive director of non-profit Inter-Parliamentary Alliance on China, told Salon.

DeepSeek is a fully Chinese company and is subject to Communist Party control, unlike TikTok which positions itself as independent from parent company ByteDance, he said. 

“DeepSeek logs your keystrokes, device data, location and so much other information and stores it all in China,” de Pulford said. “So you’ll never know if the Chinese state has been crunching your data to gain strategic advantage, and DeepSeek would be breaking the law if they told you.”  

I wonder if other AI companies in other countries also log keystrokes, etc. Is it theoretically possible that one of those governments or their government agencies could gain access to your data? It’s obvious in China, but people in other countries may face the same issues.

Censorship: DeepSeek and ChatGPT

Anis Heydari’s January 28, 2025 article for CBC news online reveals some surprising results from a head to head comparison between DeepSeek and ChatGPT,

The Chinese-made AI chatbot DeepSeek may not always answer some questions about topics that are often censored by Beijing, according to tests run by CBC News and The Associated Press, and is providing different information than its U.S.-owned competitor ChatGPT.

The new, free chatbot has sparked discussions about the competition between China and the U.S. in AI development, with many users flocking to test it. 

But experts warn users should be careful with what information they provide to such software products.

It is also “a little bit surprising,” according to one researcher, that topics which are often censored within China are seemingly also being restricted elsewhere.

“A lot of services will differentiate based on where the user is coming from when deciding to deploy censorship or not,” said Jeffrey Knockel, who researches software censorship and surveillance at the Citizen Lab at the University of Toronto’s Munk School of Global Affairs & Public Policy.

“With this one, it just seems to be censoring everyone.”

Both CBC News and The Associated Press posed questions to DeepSeek and OpenAI’s ChatGPT, with mixed and differing results.

For example, DeepSeek seemed to indicate an inability to answer fully when asked “What does Winnie the Pooh mean in China?” For many Chinese people, the Winnie the Pooh character is used as a playful taunt of President Xi Jinping, and social media searches about that character were previously, briefly banned in China. 

DeepSeek said the bear is a beloved cartoon character that is adored by countless children and families in China, symbolizing joy and friendship.

Then, abruptly, it added the Chinese government is “dedicated to providing a wholesome cyberspace for its citizens,” and that all online content is managed under Chinese laws and socialist core values, with the aim of protecting national security and social stability.

CBC News was unable to produce this response. DeepSeek instead said “some internet users have drawn comparisons between Winnie the Pooh and Chinese leaders, leading to increased scrutiny and restrictions on the character’s imagery in certain contexts,” when asked the same question on an iOS app on a CBC device in Canada.

Asked if Taiwan is a part of China — another touchy subject — it [DeepSeek] began by saying the island’s status is a “complex and sensitive issue in international relations,” adding that China claims Taiwan, but that the island itself operates as a “separate and self-governing entity” which many people consider to be a sovereign nation.

But as that answer was being typed out, for both CBC and the AP, it vanished and was replaced with: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

… Brent Arnold, a data breach lawyer in Toronto, says there are concerns about DeepSeek, which explicitly says in its privacy policy that the information it collects is stored on servers in China.

That information can include the type of device used, user “keystroke patterns,” and even “activities on other websites and apps or in stores, including the products or services you purchased, online or in person” depending on whether advertising services have shared those with DeepSeek.

“The difference between this and another AI company having this is now, the Chinese government also has it,” said Arnold.

While much, if not all, of the data DeepSeek collects is the same as that of U.S.-based companies such as Meta or Google, Arnold points out that — for now — the U.S. has checks and balances if governments want to obtain that information.

“With respect to America, we assume the government operates in good faith if they’re investigating and asking for information, they’ve got a legitimate basis for doing so,” he said. 

Right now, Arnold says it’s not accurate to compare Chinese and U.S. authorities in terms of their ability to take personal information. But that could change.

“I would say it’s a false equivalency now. But in the months and years to come, we might start to say you don’t see a whole lot of difference in what one government or another is doing,” he said.

Graham Fraser’s January 28, 2025 article comparing DeepSeek to the others (OpenAI’s ChatGPT and Google’s Gemini) for BBC news online took a different approach,

Writing Assistance

When you ask ChatGPT what the most popular reasons to use ChatGPT are, it says that assisting people to write is one of them.

From gathering and summarising information in a helpful format to even writing blog posts on a topic, ChatGPT has become an AI companion for many across different workplaces.

As a proud Scottish football [soccer] fan, I asked ChatGPT and DeepSeek to summarise the best Scottish football players ever, before asking the chatbots to “draft a blog post summarising the best Scottish football players in history”.

DeepSeek responded in seconds, with a top ten list – Kenny Dalglish of Liverpool and Celtic was number one. It helpfully summarised which position the players played in, their clubs, and a brief list of their achievements.

DeepSeek also detailed two non-Scottish players – Rangers legend Brian Laudrup, who is Danish, and Celtic hero Henrik Larsson. For the latter, it added “although Swedish, Larsson is often included in discussions of Scottish football legends due to his impact at Celtic”.

For its subsequent blog post, it did go into detail of Laudrup’s nationality before giving a succinct account of the careers of the players.

ChatGPT’s answer to the same question contained many of the same names, with “King Kenny” once again at the top of the list.

Its detailed blog post briefly and accurately went into the careers of all the players.

It concluded: “While the game has changed over the decades, the impact of these Scottish greats remains timeless.” Indeed.

For this fun test, DeepSeek was certainly comparable to its best-known US competitor.

Coding

Brainstorming ideas

Learning and research

Steaming ahead

The tasks I set the chatbots were simple but they point to something much more significant – the winner of the so-called AI race is far from decided.

For all the vast resources US firms have poured into the tech, their Chinese rival has shown their achievements can be emulated.

Reception from the science community

Days before the news outlets discovered DeepSeek, the company published a paper about its Large Language Models (LLMs) and its new chatbot on arXiv. Here’s a little more information,

DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

[over 100 authors are listed]

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.

Cite as: arXiv:2501.12948 [cs.CL]
(or arXiv:2501.12948v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2501.12948

Submission history

From: Wenfeng Liang [view email]
[v1] Wed, 22 Jan 2025 15:19:35 UTC (928 KB)

You can also find a PDF version of the paper here or another online version here at Hugging Face.

As for the science community’s response, the title of Elizabeth Gibney’s January 23, 2025 article “China’s cheap, open AI model DeepSeek thrills scientists” for Nature says it all, Note: Links have been removed,

A Chinese-built large language model called DeepSeek-R1 is thrilling scientists as an affordable and open rival to ‘reasoning’ models such as OpenAI’s o1.

These models generate responses step-by-step, in a process analogous to human reasoning. This makes them more adept than earlier language models at solving scientific problems and could make them useful in research. Initial tests of R1, released on 20 January, show that its performance on certain tasks in chemistry, mathematics and coding is on par with that of o1 — which wowed researchers when it was released by OpenAI in September.

“This is wild and totally unexpected,” Elvis Saravia, an AI researcher and co-founder of the UK-based AI consulting firm DAIR.AI, wrote on X.

R1 stands out for another reason. DeepSeek, the start-up in Hangzhou that built the model, has released it as ‘open-weight’, meaning that researchers can study and build on the algorithm. Published under an MIT licence, the model can be freely reused but is not considered fully open source, because its training data has not been made available.

“The openness of DeepSeek is quite remarkable,” says Mario Krenn, leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany. By comparison, o1 and other models built by OpenAI in San Francisco, California, including its latest effort, o3, are “essentially black boxes”, he says.

DeepSeek hasn’t released the full cost of training R1, but it is charging people using its interface around one-thirtieth of what o1 costs to run. The firm has also created mini ‘distilled’ versions of R1 to allow researchers with limited computing power to play with the model. An “experiment that cost more than £300 with o1, cost less than $10 with R1,” says Krenn. “This is a dramatic difference which will certainly play a role in its future adoption.”
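To get a feel for why those distilled checkpoints matter to researchers with limited computing power, here is a back-of-the-envelope sketch (my own illustration, not from the Nature article) of the memory needed just to hold each model’s weights, assuming the common 16-bit (2 bytes per parameter) format and ignoring runtime overhead such as activations and caches:

```python
# Rough memory footprint of the DeepSeek-R1 distilled checkpoints listed in
# the paper (1.5B to 70B parameters), assuming 2-byte (fp16/bf16) weights.
# This is a back-of-the-envelope estimate only, not a hardware requirement.
SIZES_BILLIONS = [1.5, 7, 8, 14, 32, 70]

def fp16_weight_gb(params_billions: float) -> float:
    """Approximate GiB needed just to store the weights at 2 bytes/parameter."""
    return params_billions * 1e9 * 2 / 2**30

for b in SIZES_BILLIONS:
    print(f"{b:>5}B params ≈ {fp16_weight_gb(b):6.1f} GiB of weights")
```

Under that assumption, the 1.5B distilled model needs under 3 GiB for its weights, within reach of a consumer GPU, while the full 70B distillation needs on the order of 130 GiB, which is exactly the gap the smaller distilled versions are meant to bridge.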

The kerfuffle has died down for now.

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, if one is to take Matthew Braga’s word for the situation (his article is excerpted below).

In any event, Toronto seemed to have a mild advantage over Montréal initially with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article, Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and have many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives but, at that point, the article’s focus shifts exclusively to Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies: “xxx is like no other business before” and “it is a technological revolution.” Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’ which, if you look at his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

Canon-Molecular Imprints deal and its impact on shrinking chips (integrated circuits)

There’s quite an interesting April 20, 2014 essay on Nanotechnology Now, which provides some insight into the nanoimprinting market. I recommend reading it but, for anyone who is not intimately familiar with the scene, here are a few excerpts along with my attempts to decode this insider’s view (the writer is from Martini Tech),

About two months ago, important news shook the small but lively Japanese nanoimprint community: Canon has decided to acquire, making it a wholly-owned subsidiary, Texas-based Molecular Imprints, a strong player in the nanotechnology industry and one of the main makers of nanoimprint devices such as the Imprio 450 and other models.

So, Canon, a Japanese company, has made a move into the nanoimprinting sector by purchasing Molecular Imprints, a US company based in Texas, outright.

This next part concerns the expiration of Moore’s Law (i.e., the observation that roughly every two years the number of transistors on a chip doubles, making chips smaller and faster) and is why the major chip makers are searching for new solutions as per the fifth paragraph in this excerpt,

Molecular Imprints’ devices are aimed at the IC [integrated circuits, aka chips, I think] patterning market and not just at the relatively smaller applications market to which nanoimprint is usually confined: patterning of bio culture substrates, thin film applications for the solar industry, anti-reflection films for smartphone and LED TV screens, patterning of surfaces for microfluidics among others.

While each one of the markets listed above has the potential of explosive growth in the medium-long term future, at the moment none of them is worth more than a few percentage points, at best, of the IC patterning market.

The mainstream technology behind IC patterning is still optical stepper lithography and the situation is not likely to change in the near term future.

However, optical lithography has its limitations, the main challenge to its 40-year dominance not coming only from technological and engineering issues, but mostly from economical ones.

While from a strictly technological point of view it may still be possible for the major players in the chip industry (Intel, GF, TSMC, Nvidia among others) to go ahead with optical steppers and reach the 5nm node using multi-patterning and immersion, the cost increases associated with each die shrink are becoming staggeringly high.

A top-of-the-notch stepper in the early 90s could have been bought for a few millions of dollars, now the price has increased to some tens of millions for the top machines

The essay describes the market impact this acquisition may have for Canon,

Molecular Imprints has been a company on the forefront of commercialization of nanoimprint-based solutions for IC manufacturing, but so far their solutions have yet to become a viable alternative [in the] HVM [high-volume manufacturing] IC manufacturing market.

The main stumbling blocks for IC patterning using nanoimprint technology are: the occurrence of defects on the mask that inevitably replicates them on each substrate and the lack of alignment precision between the mold and the substrate needed to pattern multi-layered structures.

Therefore, applications for nanoimprint have been limited to markets where no non-periodical structure patterning is needed and where one-layered patterning is sufficient.

But the big market where everyone is aiming for is, of course, IC patterning and this is where much of the R&D effort goes.

While logic patterning with nanoimprint may still be years away, simple patterning of NAND structures may be feasible in the near future, and the purchase of Molecular Imprints by Canon is a step in this direction

Patterning of NAND structures may still require multi-layered structures, but the alignment precision needed is considerably lower than logic.

Moreover, NAND requirements for defectivity are more relaxed than for logic due to the inherent redundancy of the design, therefore, NAND manufacturing is the natural first step for nanoimprint in the IC manufacturing market and, if successful, it may open a whole new range of opportunities for the whole sector.

Assuming I’ve read the rest of this essay rightly, here’s my summary: there are a number of techniques being employed to make chips smaller and more efficient. Canon has purchased a company that is versed in a technique that creates NAND (you can find definitions here) structures in the hope that this technique can be commercialized so that Canon becomes dominant in the sector because (1) they got there first and/or because (2) NAND manufacturing becomes a clear leader, crushing competition from other technologies. This could cover short-term goals and, I imagine Canon hopes, long-term goals.
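The essay’s point about NAND’s “inherent redundancy” can be made concrete with a toy yield calculation (my own illustration using a standard Poisson defect model, not anything from the essay): a logic die is scrapped by a single defect, while a NAND die with spare blocks can tolerate several.

```python
import math

# Toy Poisson yield model: the probability that a die has at most `tolerated`
# defects, given an average of `defects_per_die` defects landing on each die.
# A logic die effectively tolerates 0 defects; NAND's spare blocks let it
# survive a handful. The numbers below are arbitrary and illustrative only.
def yield_fraction(defects_per_die: float, tolerated: int = 0) -> float:
    """P(defect count <= tolerated) for a Poisson-distributed defect count."""
    lam = defects_per_die
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(tolerated + 1))

lam = 2.0  # assumed average defects per die
print(f"logic-style yield (0 defects allowed):  {yield_fraction(lam):.1%}")
print(f"NAND-style yield  (4 spares tolerated): {yield_fraction(lam, 4):.1%}")
```

With an (arbitrary) average of two defects per die, the zero-tolerance yield is roughly 13.5% while tolerating four defects lifts it above 94%, which is why nanoimprint’s defect rates are less of a barrier for NAND than for logic, as the essay argues.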

It was a real treat coming across this essay as it’s an insider’s view. So, thank you to the folks at Martini Tech who wrote this. You can find Molecular Imprints here.