Tag Archives: Michael Grothaus

Microsoft, D-Wave Systems, quantum computing, and quantum supremacy?

Before diving into some of the latest quantum computing news, here’s why quantum computing is so highly prized and chased after, from the Quantum supremacy Wikipedia entry. Note: Links have been removed,

In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem.[1][2][3] The term was coined by John Preskill in 2011,[1][4] but the concept dates to Yuri Manin’s 1980[5] and Richard Feynman’s 1981[6] proposals of quantum computing.

Quantum supremacy and quantum advantage have been mentioned a few times here over the years. You can check my March 6, 2020 posting for when researchers from the University of California at Santa Barbara claimed quantum supremacy and my July 31, 2023 posting for when D-Wave Systems claimed a quantum advantage on optimization problems. I’d understood quantum supremacy and quantum advantage to be synonymous but, according to the article in Betakit (keep scrolling down to the D-Wave subhead and then to the ‘A controversy of sorts’ subhead in this posting), that’s not so.

The latest news on the quantum front comes from Microsoft (February 2025) and D-Wave Systems (March 2025).

Microsoft claims a new state of matter for breakthroughs in quantum computing

Here’s the February 19, 2025 news announcement from Microsoft’s Chetan Nayak, Technical Fellow and Corporate Vice President of Quantum Hardware, Note: Links have been removed,

Quantum computers promise to transform science and society—but only after they achieve the scale that once seemed distant and elusive, and their reliability is ensured by quantum error correction. Today, we’re announcing rapid advancements on the path to useful quantum computing:

  • Majorana 1: the world’s first Quantum Processing Unit (QPU) powered by a Topological Core, designed to scale to a million qubits on a single chip.
  • A hardware-protected topological qubit: research published today in Nature, along with data shared at the Station Q meeting, demonstrate our ability to harness a new type of material and engineer a radically different type of qubit that is small, fast, and digitally controlled.
  • A device roadmap to reliable quantum computation: our path from single-qubit devices to arrays that enable quantum error correction.
  • Building the world’s first fault-tolerant prototype (FTP) based on topological qubits: Microsoft is on track to build an FTP of a scalable quantum computer—in years, not decades—as part of the final phase of the Defense Advanced Research Projects Agency (DARPA) Underexplored Systems for Utility-Scale Quantum Computing (US2QC) program.

Together, these milestones mark a pivotal moment in quantum computing as we advance from scientific exploration to technological innovation.

Harnessing a new type of material

All of today’s announcements build on our team’s recent breakthrough: the world’s first topoconductor. This revolutionary class of materials enables us to create topological superconductivity, a new state of matter that previously existed only in theory. The advance stems from Microsoft’s innovations in the design and fabrication of gate-defined devices that combine indium arsenide (a semiconductor) and aluminum (a superconductor). When cooled to near absolute zero and tuned with magnetic fields, these devices form topological superconducting nanowires with Majorana Zero Modes (MZMs) at the wires’ ends.

Chris Vallance’s February 19, 2025 article for the British Broadcasting Corporation (BBC) news online website provides a description of Microsoft’s claims and makes note of the competitive quantum research environment,

Microsoft has unveiled a new chip called Majorana 1 that it says will enable the creation of quantum computers able to solve “meaningful, industrial-scale problems in years, not decades”.

It is the latest development in quantum computing – tech which uses principles of particle physics to create a new type of computer able to solve problems ordinary computers cannot.

Creating quantum computers powerful enough to solve important real-world problems is very challenging – and some experts believe them to be decades away.

Microsoft says this timetable can now be sped up because of the “transformative” progress it has made in developing the new chip involving a “topological conductor”, based on a new material it has produced.

The firm believes its topoconductor has the potential to be as revolutionary as the semiconductor was in the history of computing.

But experts have told the BBC more data is needed before the significance of the new research – and its effect on quantum computing – can be fully assessed.

Jensen Huang – boss of the leading chip firm, Nvidia – said in January he believed “very useful” quantum computing would come in 20 years.

Chetan Nayak, a technical fellow of quantum hardware at Microsoft, said he believed the developments would shake up conventional thinking about the future of quantum computers.

“Many people have said that quantum computing, that is to say useful quantum computers, are decades away,” he said. “I think that this brings us into years rather than decades.”

Travis Humble, director of the Quantum Science Center of Oak Ridge National Laboratory in the US, said he agreed Microsoft would now be able to deliver prototypes faster – but warned there remained work to do.

“The long term goals for solving industrial applications on quantum computers will require scaling up these prototypes even further,” he said.

While rivals produced a steady stream of announcements – notably Google’s “Willow” at the end of 2024 – Microsoft seemed to be taking longer.

Pursuing this approach was, in the company’s own words, a “high-risk, high-rewards” strategy, but one it now believes is going to pay off.

If you have the time, do read Vallance’s February 19, 2025 article.

The research paper

Purdue University’s (Indiana, US) February 25, 2025 news release on EurekAlert announces publication of the research, Note: Links have been removed,

Microsoft Quantum published an article in Nature on Feb. 19 [2025] detailing recent advances in the measurement of quantum devices that will be needed to realize a topological quantum computer. Among the authors are Microsoft scientists and engineers who conduct research at Microsoft Quantum Lab West Lafayette, located at Purdue University. In an announcement by Microsoft Quantum, the team describes the operation of a device that is a necessary building block for a topological quantum computer. The published results are an important milestone along the path to construction of quantum computers that are potentially more robust and powerful than existing technologies.

“Our hope for quantum computation is that it will aid chemists, materials scientists and engineers working on the design and manufacturing of new materials that are so important to our daily lives,” said Michael Manfra, scientific director of Microsoft Quantum Lab West Lafayette and the Bill and Dee O’Brien Distinguished Professor of Physics and Astronomy, professor of materials engineering, and professor of electrical and computer engineering at Purdue. “The promise of quantum computation is in accelerating scientific discovery and its translation into useful technology. For example, if quantum computers reduce the time and cost to produce new lifesaving therapeutic drugs, that is real societal impact.” 

The Microsoft Quantum Lab West Lafayette team advanced the complex layered materials that make up the quantum plane of the full device architecture used in the tests. Microsoft scientists working with Manfra are experts in advanced semiconductor growth techniques, including molecular beam epitaxy, that are used to build low-dimensional electron systems that form the basis for quantum bits, or qubits. They built the semiconductor and superconductor layers with atomic layer precision, tailoring the material’s properties to those needed for the device architecture.

Manfra, a member of the Purdue Quantum Science and Engineering Institute, credited the strong relationship between Purdue and Microsoft, built over the course of a decade, with the advances conducted at Microsoft Quantum Lab West Lafayette. In 2017 Purdue deepened its relationship with Microsoft with a multiyear agreement that includes embedding Microsoft employees with Manfra’s research team at Purdue.

“This was a collaborative effort by a very sophisticated team, with a vital contribution from the Microsoft scientists at Purdue,” Manfra said. “It’s a Microsoft team achievement, but it’s also the culmination of a long-standing partnership between Purdue and Microsoft. It wouldn’t have been possible without an environment at Purdue that was conducive to this mode of work — I attempted to blend industrial with academic research to the betterment of both communities. I think that’s a success story.”

Quantum science and engineering at Purdue is a pillar of the Purdue Computes initiative, which is focused on advancing research in computing, physical AI, semiconductors and quantum technologies.

“This research breakthrough in the measurement of the state of quasi particles is a milestone in the development of topological quantum computing, and creates a watershed moment in the semiconductor-superconductor hybrid structure,” Purdue President Mung Chiang said. “Marking also the latest success in the strategic initiative of Purdue Computes, the deep collaboration that Professor Manfra and his team have created with the Microsoft Quantum Lab West Lafayette on the Purdue campus exemplifies the most impactful industry research partnership at any American university today.”

Most approaches to quantum computers rely on local degrees of freedom to encode information. The spin of an electron is a classic example of a qubit. But an individual spin is prone to disturbance — by relatively common things like heat, vibrations or interactions with other quantum particles — which can corrupt quantum information stored in the qubit, necessitating a great deal of effort in detecting and correcting errors. Instead of spin, topological quantum computers store information in a more distributed manner; the qubit state is encoded in the state of many particles acting in concert. Consequently, it is harder to scramble the information as the state of all the particles must be changed to alter the qubit state.

In the Nature paper, the Microsoft team was able to accurately and quickly measure the state of quasi particles that form the basis of the qubit.

“The device is used to measure a basic property of a topological qubit quickly,” Manfra said. “The team is excited to build on these positive results.”

“The team in West Lafayette pushed existing epitaxial technology to a new state-of-the-art for semiconductor-superconductor hybrid structures to ensure a perfect interface between each of the building blocks of the Microsoft hybrid system,” said Sergei Gronin, a Microsoft Quantum Lab scientist.

“The materials quality that is required for quantum computing chips necessitates constant improvements, so that’s one of the biggest challenges,” Gronin said. “First, we had to adjust and improve semiconductor technology to meet a new level that nobody was able to achieve before. But equally important was how to create this hybrid system. To do that, we had to merge a semiconducting part and a superconducting part. And that means you need to perfect the semiconductor and the superconductor and perfect the interface between them.”

While work discussed in the Nature article was performed by Microsoft employees, the exposure to industrial-scale research and development is an outstanding opportunity for Purdue students in Manfra’s academic group as well. John Watson, Geoffrey Gardner and Saeed Fallahi, who are among the coauthors of the paper, earned their doctoral degrees under Manfra and now work for Microsoft Quantum at locations in Redmond, Washington, and Copenhagen, Denmark. Most of Manfra’s former students now work for quantum computing companies, including Microsoft. Tyler Lindemann, who works in the West Lafayette lab and helped to build the hybrid semiconductor-superconductor structures required for the device, is earning a doctoral degree from Purdue under Manfra’s supervision.

“Working in Professor Manfra’s lab in conjunction with my work for Microsoft Quantum has given me a head start in my professional development, and been fruitful for my academic work,” Lindemann said. “At the same time, many of the world-class scientists and engineers at Microsoft Quantum have some background in academia, and being able to draw from their knowledge and experience is an indispensable resource in my graduate studies. From both perspectives, it’s a great opportunity.”

Here’s a link to and a citation for the paper,

Interferometric single-shot parity measurement in InAs–Al hybrid devices by Microsoft Azure Quantum, Morteza Aghaee, Alejandro Alcaraz Ramirez, Zulfi Alam, Rizwan Ali, Mariusz Andrzejczuk, Andrey Antipov, Mikhail Astafev, Amin Barzegar, Bela Bauer, Jonathan Becker, Umesh Kumar Bhaskar, Alex Bocharov, Srini Boddapati, David Bohn, Jouri Bommer, Leo Bourdet, Arnaud Bousquet, Samuel Boutin, Lucas Casparis, Benjamin J. Chapman, Sohail Chatoor, Anna Wulff Christensen, Cassandra Chua, Patrick Codd, William Cole, Paul Cooper, Fabiano Corsetti, Ajuan Cui, Paolo Dalpasso, Juan Pablo Dehollain, Gijs de Lange, Michiel de Moor, Andreas Ekefjärd, Tareq El Dandachi, Juan Carlos Estrada Saldaña, Saeed Fallahi, Luca Galletti, Geoff Gardner, Deshan Govender, Flavio Griggio, Ruben Grigoryan, Sebastian Grijalva, Sergei Gronin, Jan Gukelberger, Marzie Hamdast, Firas Hamze, Esben Bork Hansen, Sebastian Heedt, Zahra Heidarnia, Jesús Herranz Zamorano, Samantha Ho, Laurens Holgaard, John Hornibrook, Jinnapat Indrapiromkul, Henrik Ingerslev, Lovro Ivancevic, Thomas Jensen, Jaspreet Jhoja, Jeffrey Jones, Konstantin V. Kalashnikov, Ray Kallaher, Rachpon Kalra, Farhad Karimi, Torsten Karzig, Evelyn King, Maren Elisabeth Kloster, Christina Knapp, Dariusz Kocon, Jonne V. Koski, Pasi Kostamo, Mahesh Kumar, Tom Laeven, Thorvald Larsen, Jason Lee, Kyunghoon Lee, Grant Leum, Kongyi Li, Tyler Lindemann, Matthew Looij, Julie Love, Marijn Lucas, Roman Lutchyn, Morten Hannibal Madsen, Nash Madulid, Albert Malmros, Michael Manfra, Devashish Mantri, Signe Brynold Markussen, Esteban Martinez, Marco Mattila, Robert McNeil, Antonio B. Mei, Ryan V. 
Mishmash, Gopakumar Mohandas, Christian Mollgaard, Trevor Morgan, George Moussa, Chetan Nayak, Jens Hedegaard Nielsen, Jens Munk Nielsen, William Hvidtfelt Padkar Nielsen, Bas Nijholt, Mike Nystrom, Eoin O’Farrell, Thomas Ohki, Keita Otani, Brian Paquelet Wütz, Sebastian Pauka, Karl Petersson, Luca Petit, Dima Pikulin, Guen Prawiroatmodjo, Frank Preiss, Eduardo Puchol Morejon, Mohana Rajpalke, Craig Ranta, Katrine Rasmussen, David Razmadze, Outi Reentila, David J. Reilly, Yuan Ren, Ken Reneris, Richard Rouse, Ivan Sadovskyy, Lauri Sainiemi, Irene Sanlorenzo, Emma Schmidgall, Cristina Sfiligoj, Mustafeez Bashir Shah, Kevin Simoes, Shilpi Singh, Sarat Sinha, Thomas Soerensen, Patrick Sohr, Tomas Stankevic, Lieuwe Stek, Eric Stuppard, Henri Suominen, Judith Suter, Sam Teicher, Nivetha Thiyagarajah, Raj Tholapi, Mason Thomas, Emily Toomey, Josh Tracy, Michelle Turley, Shivendra Upadhyay, Ivan Urban, Kevin Van Hoogdalem, David J. Van Woerkom, Dmitrii V. Viazmitinov, Dominik Vogel, John Watson, Alex Webster, Joseph Weston, Georg W. Winkler, Di Xu, Chung Kai Yang, Emrah Yucelen, Roland Zeisel, Guoji Zheng & Justin Zilke. Nature 638, 651–655 (2025). DOI: https://doi.org/10.1038/s41586-024-08445-2 Published online: 19 February 2025 Issue Date: 20 February 2025

This paper is open access. Note: I usually tag all of the authors but not this time.

Controversy over this and previous Microsoft quantum computing claims

Elizabeth Hlavinka’s March 17, 2025 article for Salon.com provides an overview, Note: Links have been removed,

The matter making up the world around us has long-since been organized into three neat categories: solids, liquids and gases. But last month [February 2025], Microsoft announced that it had allegedly discovered another state of matter originally theorized to exist in 1937. 

This new state of matter called the Majorana zero mode is made up of quasiparticles, which act as their own particle and antiparticle. The idea is that the Majorana zero mode could be used to build a quantum computer, which could help scientists answer complex questions that standard computers are not capable of solving, with implications for medicine, cybersecurity and artificial intelligence.

In late February [2025], Sen. Ted Cruz presented Microsoft’s new computer chip at a congressional hearing, saying, “Technologies like this new chip I hold in the palm of my hand, the Majorana 1 quantum chip, are unlocking a new era of computing that will transform industries from health care to energy, solving problems that today’s computers simply cannot.”

However, Microsoft’s announcement, claiming a “breakthrough in quantum computing,” was met with skepticism from some physicists in the field. Proving that this form of quantum computing can work requires first demonstrating the existence of Majorana quasiparticles, measuring what the Majorana particles are doing, and creating something called a topological qubit used to store quantum information.

But some say that not all of the data necessary to prove this has been included in the research paper published in Nature, on which this announcement is based. And due to a fraught history of similar claims from the company being disputed and ultimately rescinded, some are extra wary of the results. [emphasis mine]

It’s not the first time Microsoft has faced backlash from presenting findings in the field. In 2018, the company reported that they had detected the presence of Majorana zero-modes in a research paper, but it was retracted by Nature, the journal that published it after a report from independent experts put their findings under more intense scrutiny.

In the [2018] report, four physicists not involved in the research concluded that it did not appear that Microsoft had intentionally misrepresented the data, but instead seemed to be “caught up in the excitement of the moment [emphasis mine].”

Establishing the existence of these particles is extremely complex in part because disorder in the device can create signals that mimic these quasiparticles when they are not actually there. 

Modern computers in use today are encoded in bits, which can either be in a zero state (no current flowing through them), or a one state (current flowing.) These bits work together to send information and signals that communicate with the computer, powering everything from cell phones to video games.

Companies like Google, IBM and Amazon have invested in designing another form of quantum computer that uses chips built with “qubits,” or quantum bits. Qubits can exist in both zero and one states at the same time due to a phenomenon called superposition. 

However, qubits are subject to external noise from the environment that can affect their performance, said Dr. Paolo Molignini, a researcher in theoretical quantum physics at Stockholm University.

“Because qubits are in a superposition of zero and one, they are very prone to errors and they are very prone to what is called decoherence, which means there could be noise, thermal fluctuations or many things that can collapse the state of the qubits,” Molignini told Salon in a video call. “Then you basically lose all of the information that you were encoding.”
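Molignini’s point about decoherence can be illustrated with a toy calculation (my own sketch, with an arbitrary noise strength): let random phase kicks accumulate on a qubit in superposition and watch the coherence, the very thing that makes a qubit more than a bit, decay away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dephasing model (my illustration; the noise strength is arbitrary):
# a qubit starts in the superposition (|0> + |1>)/sqrt(2). At each time
# step the environment adds a small random phase. Averaged over many noisy
# runs, the off-diagonal density-matrix element rho_01 -- the "coherence"
# that distinguishes a qubit from a classical bit -- decays toward zero.

trajectories, steps, kick = 5000, 50, 0.3
phases = np.cumsum(rng.normal(0.0, kick, (trajectories, steps)), axis=1)
coherence = np.abs(np.mean(np.exp(1j * phases), axis=0)) / 2  # |rho_01|(t)

print(f"|rho_01| after 1 step:  {coherence[0]:.3f}")      # still close to 0.5
print(f"|rho_01| after {steps} steps: {coherence[-1]:.3f}")  # nearly gone
```

The information in the superposition is “lost” exactly as Molignini describes: nothing dramatic happens to any single run, but the ensemble average forgets the phase.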

In December [2024], Google said its quantum computer could perform a calculation that a standard computer could complete in 10 septillion years — a period far longer than the age of the universe — in just under five minutes.

However, a general-purpose computer would require billions of qubits, so these approaches are still a far cry from having practical applications, said Dr. Patrick Lee, a physicist at the Massachusetts Institute of Technology [MIT], who co-authored the report leading to the 2018 Nature paper’s retraction.

Microsoft is taking a different approach to quantum computing by trying to develop  a topological qubit, which has the ability to store information in multiple places at once. Topological qubits exist within the Majorana zero states and are appealing because they can theoretically offer greater protection against environmental noise that destroys information within a quantum system.

Think of it like an arrow, where the arrowhead holds a portion of the information and the arrow tail holds the rest, Lee said. Distributing information across space like this is called topological protection.

“If you are able to put them far apart from each other, then you have a chance of maintaining the identity of the arrow even if it is subject to noise,” Lee told Salon in a phone interview. “The idea is that if the noise affects the head, it doesn’t kill the arrow and if it affects only the tail it doesn’t kill your arrow. It has to affect both sides simultaneously to kill your arrow, and that is very unlikely if you are able to put them apart.”
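Lee’s arrow analogy is easy to turn into a back-of-the-envelope simulation (my own toy sketch, with a made-up noise rate): if noise strikes any given location with probability p, a locally stored bit fails at rate p, while the “arrow” fails only when both ends are hit, at a rate of roughly p squared.

```python
import random

random.seed(1)

# Toy version of the "arrow" analogy (my illustration, not the paper's
# model). Each trial, noise independently hits any given location with
# probability p. A locally stored bit dies whenever its one location is
# hit; the non-locally stored bit dies only if BOTH ends are hit at once.

p, trials = 0.05, 200_000
local_fails = sum(random.random() < p for _ in range(trials))
both_ends_fails = sum((random.random() < p) and (random.random() < p)
                      for _ in range(trials))

print(f"local storage failure rate:     {local_fails / trials:.4f}")      # ~ p
print(f"two-ended storage failure rate: {both_ends_fails / trials:.5f}")  # ~ p*p
```

With p at 5%, the distributed encoding fails roughly twenty times less often — which is the whole appeal of topological protection.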

… Lee believes that even if the data doesn’t entirely prove that topological qubits exist in the Majorana zero-state, it still represents a scientific advancement. But he noted that several important issues need to be solved before it has practical implications. For one, the coherence time of these particles — or how long they can exist without being affected by environmental noise — is still very short, he explained.

“They make a measurement, come back, and the qubit has changed, so you have lost your coherence,” Lee said. “With this very short time, you cannot do anything with it.”

“I just wish they [Microsoft] were a bit more careful with their claims because I fear that if they don’t measure up to what they are saying, there might be a backlash at some point where people say, ‘You promised us all these fancy things and where are they now?’” Molignini said. “That might damage the entire quantum community, not just themselves.”

If you have the time, please read Hlavinka’s March 17, 2025 article in its entirety.

D-Wave Quantum Systems claims quantum supremacy on a real-world problem

A March 15, 2025 article by Bob Yirka for phys.org announces the news from D-Wave Quantum Systems. The company, formerly headquartered in Burnaby, BC (Canada), now seems to be largely a US company, with its main headquarters in Palo Alto, California, and an ancillary or junior (?) headquarters in Canada. Note: A link has been removed,

A team of quantum computer researchers at quantum computer maker D-Wave, working with an international team of physicists and engineers, is claiming that its latest quantum processor has been used to run a quantum simulation faster than could be done with a classical computer.

In their paper published in the journal Science, the group describes how they ran a quantum version of a mathematical approximation regarding how matter behaves when it changes states, such as from a gas to a liquid—in a way that they claim would be nearly impossible to conduct on a traditional computer.

Here’s a March 12, 2025 D-Wave Systems (now D-Wave Quantum Inc.) news release touting its quantum supremacy on a real-world problem,

New landmark peer-reviewed paper published in Science, “Beyond-Classical Computation in Quantum Simulation,” unequivocally validates D-Wave’s achievement of the world’s first and only demonstration of quantum computational supremacy on a useful, real-world problem

Research shows D-Wave annealing quantum computer performs magnetic materials simulation in minutes that would take nearly one million years and more than the world’s annual electricity consumption to solve using a classical supercomputer built with GPU clusters

D-Wave Advantage2 annealing quantum computer prototype used in supremacy achievement, a testament to the system’s remarkable performance capabilities

PALO ALTO, Calif. – March 12, 2025 – D-Wave Quantum Inc. (NYSE: QBTS) (“D-Wave” or the “Company”), a leader in quantum computing systems, software, and services and the world’s first commercial supplier of quantum computers, today announced a scientific breakthrough published in the esteemed journal Science, confirming that its annealing quantum computer outperformed one of the world’s most powerful classical supercomputers in solving complex magnetic materials simulation problems with relevance to materials discovery. The new landmark peer-reviewed paper, “Beyond-Classical Computation in Quantum Simulation,” validates this achievement as the world’s first and only demonstration of quantum computational supremacy on a useful problem.

An international collaboration of scientists led by D-Wave performed simulations of quantum dynamics in programmable spin glasses—computationally hard magnetic materials simulation problems with known applications to business and science—on both D-Wave’s Advantage2™ prototype annealing quantum computer and the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory. The work simulated the behavior of a suite of lattice structures and sizes across a variety of evolution times and delivered a multiplicity of important material properties. D-Wave’s quantum computer performed the most complex simulation in minutes and with a level of accuracy that would take nearly one million years using the supercomputer. In addition, it would require more than the world’s annual electricity consumption to solve this problem using the supercomputer, which is built with graphics processing unit (GPU) clusters.
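To give a flavour of the problem class, here is my own minimal classical sketch — simulated annealing on a tiny random spin glass. It is a stand-in for the kind of frustrated magnetic-materials problem being described, nowhere near the scale or the quantum dynamics of the actual Science paper,

```python
import numpy as np

rng = np.random.default_rng(42)

# A spin glass assigns random +/-1 couplings J_ij between spins s_i = +/-1;
# the energy E = -sum_{i<j} J_ij s_i s_j is "frustrated" (no assignment
# satisfies every coupling) and hard to minimize. This is classical
# simulated annealing on a toy 16-spin instance -- a stand-in for the
# problem class, not the quantum simulation reported in the paper.

n = 16
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)  # couplings for i < j
s = rng.choice([-1, 1], size=n).astype(float)         # random start state

def energy(s):
    return -s @ J @ s

for T in np.geomspace(2.0, 0.05, 20_000):   # slowly cool the system
    i = rng.integers(n)
    dE = 2 * s[i] * ((J[i] + J[:, i]) @ s)  # energy cost of flipping spin i
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]                         # Metropolis acceptance rule

print(f"annealed energy: {energy(s):.1f} (lower is better)")
```

A quantum annealer attacks the same kind of energy landscape physically, by evolving real quantum hardware rather than flipping simulated spins one at a time.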

“This is a remarkable day for quantum computing. Our demonstration of quantum computational supremacy on a useful problem is an industry first. All other claims of quantum systems outperforming classical computers have been disputed or involved random number generation of no practical value,” said Dr. Alan Baratz, CEO of D-Wave. “Our achievement shows, without question, that D-Wave’s annealing quantum computers are now capable of solving useful problems beyond the reach of the world’s most powerful supercomputers. We are thrilled that D-Wave customers can use this technology today to realize tangible value from annealing quantum computers.”

Realizing an Industry-First Quantum Computing Milestone
The behavior of materials is governed by the laws of quantum physics. Understanding the quantum nature of magnetic materials is crucial to finding new ways to use them for technological advancement, making materials simulation and discovery a vital area of research for D-Wave and the broader scientific community. Magnetic materials simulations, like those conducted in this work, use computer models to study how tiny particles not visible to the human eye react to external factors. Magnetic materials are widely used in medical imaging, electronics, superconductors, electrical networks, sensors, and motors.

“This research proves that D-Wave’s quantum computers can reliably solve quantum dynamics problems that could lead to discovery of new materials,” said Dr. Andrew King, senior distinguished scientist at D-Wave. “Through D-Wave’s technology, we can create and manipulate programmable quantum matter in ways that were impossible even a few years ago.”

Materials discovery is a computationally complex, energy-intensive and expensive task. Today’s supercomputers and high-performance computing (HPC) centers, which are built with tens of thousands of GPUs, do not always have the computational processing power to conduct complex materials simulations in a timely or energy-efficient manner. For decades, scientists have aspired to build a quantum computer capable of solving complex materials simulation problems beyond the reach of classical computers. D-Wave’s advancements in quantum hardware have made it possible for its annealing quantum computers to process these types of problems for the first time.

“This is a significant milestone made possible through over 25 years of research and hardware development at D-Wave, two years of collaboration across 11 institutions worldwide, and more than 100,000 GPU and CPU hours of simulation on one of the world’s fastest supercomputers as well as computing clusters in collaborating institutions,” said Dr. Mohammad Amin, chief scientist at D-Wave. “Besides realizing Richard Feynman’s vision of simulating nature on a quantum computer, this research could open new frontiers for scientific discovery and quantum application development.” 

Advantage2 System Demonstrates Powerful Performance Gains
The results shown in “Beyond-Classical Computation in Quantum Simulation” were enabled by D-Wave’s previous scientific milestones published in Nature Physics (2022) and Nature (2023), which theoretically and experimentally showed that quantum annealing provides a quantum speedup in complex optimization problems. These scientific advancements led to the development of the Advantage2 prototype’s fast anneal feature, which played an essential role in performing the precise quantum calculations needed to demonstrate quantum computational supremacy.

“The broader quantum computing research and development community is collectively building an understanding of the types of computations for which quantum computing can overtake classical computing. This effort requires ongoing and rigorous experimentation,” said Dr. Trevor Lanting, chief development officer at D-Wave. “This work is an important step toward sharpening that understanding, with clear evidence of where our quantum computer was able to outperform classical methods. We believe that the ability to recreate the entire suite of results we produced is not possible classically. We encourage our peers in academia to continue efforts to further define the line between quantum and classical capabilities, and we believe these efforts will help drive the development of ever more powerful quantum computing technology.”

The Advantage2 prototype used to achieve quantum computational supremacy is available for customers to use today via D-Wave’s Leap™ real-time quantum cloud service. The prototype provides substantial performance improvements from previous-generation Advantage systems, including increased qubit coherence, connectivity, and energy scale, which enables higher-quality solutions to larger, more complex problems. Moreover, D-Wave now has an Advantage2 processor that is four times larger than the prototype used in this work and has extended the simulations of this paper from hundreds of qubits to thousands of qubits, which are significantly larger than those described in this paper.

Leading Industry Voices Echo Support
Dr. Hidetoshi Nishimori, Professor, Department of Physics, Tokyo Institute of Technology:
“This paper marks a significant milestone in demonstrating the real-world applicability of large-scale quantum computing. Through rigorous benchmarking of quantum annealers against state-of-the-art classical methods, it convincingly establishes a quantum advantage in tackling practical problems, revealing the transformative potential of quantum computing at an unprecedented scale.”

Dr. Seth Lloyd, Professor of Quantum Mechanical Engineering, MIT:
“Although large-scale, fully error-corrected quantum computers are years in the future, quantum annealers can probe the features of quantum systems today. In an elegant paper, the D-Wave group has used a large-scale quantum annealer to uncover patterns of entanglement in a complex quantum system that lie far beyond the reach of the most powerful classical computer. The D-Wave result shows the promise of quantum annealers for exploring exotic quantum effects in a wide variety of systems.”

Dr. Travis Humble, Director of Quantum Science Center, Distinguished Scientist at Oak Ridge National Laboratory:
“ORNL seeks to expand the frontiers of computation through many different avenues, and benchmarking quantum computing for materials science applications provides critical input to our understanding of new computational capabilities.”

Dr. Juan Carrasquilla, Associate Professor at the Department of Physics, ETH Zürich:
“I believe these results mark a critical scientific milestone for D-Wave. They also serve as an invitation to the scientific community, as these results offer a strong benchmark and motivation for developing novel simulation techniques for out-of-equilibrium dynamics in quantum many-body physics. Furthermore, I hope these findings encourage theoretical exploration of the computational challenges involved in performing such simulations, both classically and quantum-mechanically.”

Dr. Victor Martin-Mayor, Professor of Theoretical Physics, Universidad Complutense de Madrid:
“This paper is not only a tour-de-force for experimental physics, it is also remarkable for the clarity of the results. The authors have addressed a problem that is regarded both as important and as very challenging to a classical computer. The team has shown that their quantum annealer performs better at this task than the state-of-the-art methods for classical simulation.”

Dr. Alberto Nocera, Senior Staff Scientist, The University of British Columbia:
“Our work shows the impracticability of state-of-the-art classical simulations to simulate the dynamics of quantum magnets, opening the door for quantum technologies based on analog simulators to solve scientific questions that may otherwise remain unanswered using conventional computers.”

About D-Wave Quantum Inc.
D-Wave is a leader in the development and delivery of quantum computing systems, software, and services. We are the world’s first commercial supplier of quantum computers, and the only company building both annealing and gate-model quantum computers. Our mission is to help customers realize the value of quantum, today. Our 5,000+ qubit Advantage™ quantum computers, the world’s largest, are available on-premises or via the cloud, supported by 99.9% availability and uptime. More than 100 organizations trust D-Wave with their toughest computational challenges. With over 200 million problems submitted to our Advantage systems and Advantage2™ prototypes to date, our customers apply our technology to address use cases spanning optimization, artificial intelligence, research and more. Learn more about realizing the value of quantum computing today and how we’re shaping the quantum-driven industrial and societal advancements of tomorrow: www.dwavequantum.com.

Forward-Looking Statements
Certain statements in this press release are forward-looking, as defined in the Private Securities Litigation Reform Act of 1995. These statements involve risks, uncertainties, and other factors that may cause actual results to differ materially from the information expressed or implied by these forward-looking statements and may not be indicative of future results. These forward-looking statements are subject to a number of risks and uncertainties, including, among others, various factors beyond management’s control, including the risks set forth under the heading “Risk Factors” discussed under the caption “Item 1A. Risk Factors” in Part I of our most recent Annual Report on Form 10-K or any updates discussed under the caption “Item 1A. Risk Factors” in Part II of our Quarterly Reports on Form 10-Q and in our other filings with the SEC. Undue reliance should not be placed on the forward-looking statements in this press release in making an investment decision, which are based on information available to us on the date hereof. We undertake no duty to update this information unless required by law.

Here’s a link to and a citation for the most recent paper,

Beyond-classical computation in quantum simulation by Andrew D. King , Alberto Nocera, Marek M. Rams, Jacek Dziarmaga, Roeland Wiersema, William Bernoudy, Jack Raymond, Nitin Kaushal, Niclas Heinsdorf, Richard Harris, Kelly Boothby, Fabio Altomare, Mohsen Asad, Andrew J. Berkley, Martin Boschnak, Kevin Chern, Holly Christiani, Samantha Cibere, Jake Connor, Martin H. Dehn, Rahul Deshpande, Sara Ejtemaee, Pau Farre, Kelsey Hamer, Emile Hoskinson, Shuiyuan Huang, Mark W. Johnson, Samuel Kortas, Eric Ladizinsky, Trevor Lanting, Tony Lai, Ryan Li, Allison J. R. MacDonald, Gaelen Marsden, Catherine C. McGeoch, Reza Molavi, Travis Oh, Richard Neufeld, Mana Norouzpour, Joel Pasvolsky, Patrick Poitras, Gabriel Poulin-Lamarre, Thomas Prescott, Mauricio Reis, Chris Rich, Mohammad Samani, Benjamin Sheldan, Anatoly Smirnov, Edward Sterpka, Berta Trullas Clavera, Nicholas Tsai, Mark Volkmann, Alexander M. Whiticar, Jed D. Whittaker, Warren Wilkinson, Jason Yao, T.J. Yi, Anders W. Sandvik, Gonzalo Alvarez, Roger G. Melko, Juan Carrasquilla, Marcel Franz, and Mohammad H. Amin. Science 12 Mar 2025 First Release DOI: 10.1126/science.ado6285

This paper appears to be open access. Note: I usually tag all of the authors, but not this time either.

A controversy of sorts

Madison McLauchlan’s March 19, 2025 article for Betakit (website for Canadian Startup News & Tech Innovation), Note: Links have been removed,

Canadian-born company D-Wave Quantum Systems said it achieved “quantum supremacy” last week after publishing what it calls a groundbreaking paper in the prestigious journal Science. Despite the lofty term, Canadian experts say supremacy is not the be-all, end-all of quantum innovation. 

D-Wave, which has labs in Palo Alto, Calif., and Burnaby, BC, claimed in a statement that it has shown “the world’s first and only demonstration of quantum computational supremacy on a useful, real-world problem.”

Coined in the early 2010s by physicist John Preskill, quantum supremacy is the ability of a quantum computing system to solve a problem no classical computer can in a feasible amount of time. The metric makes no mention of whether the problem needs to be useful or relevant to real life. Google researchers published a paper in Nature in 2019 claiming they cleared that bar with the Sycamore quantum processor. Researchers at the University of Science and Technology of China claimed they demonstrated quantum supremacy several times.

D-Wave’s attempt differs in that its researchers aimed to solve a real-world materials-simulation problem with quantum computing—one the company claims would be nearly impossible for a traditional computer to solve in a reasonable amount of time. D-Wave used an annealing quantum computer, designed to solve optimization problems. The problem is represented as an energy landscape, where the “lowest energy state” corresponds to the solution.
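The energy-landscape idea can be illustrated with annealing’s classical ancestor, simulated annealing. The sketch below is purely illustrative (a tiny ferromagnetic Ising chain invented for this example, with made-up parameters); D-Wave’s hardware performs quantum annealing, which explores the landscape via quantum fluctuations rather than the thermal ones simulated here.

```python
import math
import random

def energy(spins, J=1.0):
    """Ising-chain energy: E = -J * sum_i s_i * s_{i+1} (lower is better)."""
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

def anneal(n=12, steps=20000, t_start=5.0, t_end=0.01, seed=1):
    """Classical simulated annealing on a ferromagnetic chain of n spins."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for k in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / steps)
        i = rng.randrange(n)
        before = energy(spins)
        spins[i] *= -1                    # propose flipping one spin
        delta = energy(spins) - before
        # Metropolis rule: always keep downhill moves; keep uphill moves
        # with probability exp(-delta / t), which shrinks as t cools.
        if delta > 0 and rng.random() >= math.exp(-delta / t):
            spins[i] *= -1                # reject: undo the flip
    return spins

final = anneal()
# The ground state is all spins aligned, with energy -(n - 1) = -11.
print(energy(final))
```

The “lowest energy state corresponds to the solution” framing above is exactly this: the optimization problem is encoded so that the aligned (minimum-energy) configuration is the answer being sought.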

While exciting, quantum supremacy is just one metric among several that mark the progress toward widely useful quantum computers, industry experts told BetaKit. 

“It is a very important and mostly academic metric, but certainly not the most important in the grand scheme of things, as it doesn’t take into account the usefulness of the algorithm,” said Martin Laforest, managing partner at Quantacet, a specialized venture capital fund for quantum startups. 

He added that Google and Xanadu’s [Xanadu Quantum Technologies based in Toronto, Canada] past claims to quantum supremacy were “extraordinary pieces of work, but didn’t unlock practicality.” 

Laforest, along with executives at Canadian quantum startups Nord Quantique and Photonic, say that the milestones of ‘quantum utility’ or ‘quantum advantage’ may be more important than supremacy. 

According to quantum computing company Quera [QuEra?], quantum advantage is the demonstration of a quantum algorithm solving a real-world problem on a quantum computer faster than any classical algorithm running on any classical computer. On the other hand, quantum utility, according to IBM, refers to when a quantum computer is able to perform reliable computations at a scale beyond brute-force classical computing methods that provide exact solutions to computational problems.

Error correction hasn’t traditionally been considered a requirement for quantum supremacy, but Laforest told BetaKit the term is “an ever-moving target, constantly challenged by advances in classical algorithms.” He added: “In my opinion, some level of supremacy or utility may be possible in niche areas without error correction, but true disruption requires it.”

Paul Terry, CEO of Vancouver-based Photonic, thinks that though D-Wave’s claim to quantum supremacy shows “continued progress to real value,” scalability is the industry’s biggest hurdle to overcome.

But as with many milestone claims in the quantum space, D-Wave’s latest innovation has been met with scrutiny from industry competitors and researchers who question the breakthrough’s significance, claiming that classical computers have achieved similar results. Laforest echoed this sentiment.

“Personally, I wouldn’t say it’s an unequivocal demonstration of supremacy, but it is a damn nice experiment that once again shows the murky zone between traditional computing and early quantum advantage,” Laforest said.

Originally founded out of the University of British Columbia, D-Wave went public on the New York Stock Exchange just over two years ago through a merger with a special-purpose acquisition company in 2022. D-Wave became a Delaware-domiciled corporation as part of the deal.

Earlier this year, D-Wave’s stock price dropped after Nvidia CEO Jensen Huang publicly stated that he estimated that useful quantum computers were more than 15 years away. D-Wave’s stock price, which had been struggling, has seen a considerable bump in recent months alongside a broader boost in the quantum market. The price popped after its most recent earnings, shared right after its quantum supremacy announcement. 

The beat goes on

Some of this is standard in science. There’s always debate over big claims, and it’s not unusual for people to get overexcited and have to make a retraction. Scientists are people too. That said, there’s a lot of money on the line, and that appears to be making the situation even more volatile than usual.

That last paragraph was completed on the morning of March 21, 2025 and later that afternoon I came across this March 21, 2025 article by Michael Grothaus for Fast Company, Note: Links have been removed,

Quantum computing stocks got pummeled yesterday, with the four most prominent public quantum computing companies—IonQ, Rigetti Computing, Quantum Computing Inc., and D-Wave Quantum Inc.—falling anywhere from over 9% to over 18%. The reason? A lot of it may have to do with AI chip giant Nvidia. Again.

Stocks crash yesterday on Nvidia quantum news

Yesterday was a bit of a bloodbath on the stock market for the four most prominent publicly traded quantum computing companies. …

All four of these quantum computing stocks [IonQ, Inc.; Rigetti Computing, Inc.; Quantum Computing Inc.; D-Wave Quantum Inc.] tumbled on the day that AI chip giant Nvidia kicked off its two-day Quantum Day event. In a blog post from January 14 announcing Quantum Day, Nvidia said the event “brings together leading experts for a comprehensive and balanced perspective on what businesses should expect from quantum computing in the coming decades — mapping the path toward useful quantum applications.”

Besides bringing quantum experts together, the AI behemoth also announced that it will be launching a new quantum computing research center in Boston.

Called the NVIDIA Accelerated Quantum Research Center (NVAQC), the new research lab “will help solve quantum computing’s most challenging problems, ranging from qubit noise to transforming experimental quantum processors into practical devices,” the company said in a press release.

The NVAQC’s location in Boston means it will be near both Harvard University and the Massachusetts Institute of Technology (MIT). 

Before Nvidia’s announcement yesterday, IonQ, Rigetti, D-Wave, and Quantum Computing Inc. were the leaders in the nascent field of quantum computing. And while they still are right now (Nvidia’s quantum research lab hasn’t been built yet), the fear is that Nvidia could use its deep pockets to quickly buy its way into a leadership spot in the field. With its $2.9 trillion market cap, the company can easily afford to throw billions of research dollars into quantum computing.

As noted by the Motley Fool, the location of the NVIDIA Accelerated Quantum Research Center in Boston will also allow Nvidia to more easily tap into top quantum talent from Harvard and MIT—talent that may have otherwise gone to IonQ, Rigetti, D-Wave, and Quantum Computing Inc.

Nvidia’s announcement is a massive about-face from the company in regard to how it views quantum computing. It’s also the second time that Nvidia has caused quantum stocks to crash this year. Back in January, shares in prominent quantum computing companies fell after Huang said that practical use of quantum computing was decades away.

Those comments were something quantum computing company CEOs like D-Wave’s Alan Baratz took issue with. “It’s an egregious error on Mr. Huang’s part,” Baratz told Fast Company at the time. “We’re not decades away from commercial quantum computers. They exist. There are companies that are using our quantum computer today.”

According to Investor’s Business Daily, Huang reportedly got the idea for Nvidia’s Quantum Day event after the blowback to his comments, inviting quantum computing executives to the event to explain why he was incorrect about quantum computing.

The word is volatile.

Congratulations to winners of 2020 Nobel Prize for Chemistry: Dr. Emmanuelle Charpentier & Dr. Jennifer A. Doudna (CRISPR-cas9)

It’s possible there’s a more dramatic development in the field of contemporary gene-editing but it’s indisputable that CRISPR (clustered regularly interspaced short palindromic repeats) -cas9 (CRISPR-associated 9 [protein]) ranks very highly indeed.

The technique, first discovered (or developed) in 2012, has brought recognition in the form of the 2020 Nobel Prize for Chemistry to CRISPR’s two discoverers, Emmanuelle Charpentier and Jennifer Doudna.

An October 7, 2020 news item on phys.org announces the news,

The Nobel Prize in chemistry went to two researchers Wednesday [October 7, 2020] for a gene-editing tool that has revolutionized science by providing a way to alter DNA, the code of life—technology already being used to try to cure a host of diseases and raise better crops and livestock.

Emmanuelle Charpentier of France and Jennifer A. Doudna of the United States won for developing CRISPR-cas9, a very simple technique for cutting a gene at a specific spot, allowing scientists to operate on flaws that are the root cause of many diseases.

“There is enormous power in this genetic tool,” said Claes Gustafsson, chair of the Nobel Committee for Chemistry.

More than 100 clinical trials are underway to study using CRISPR to treat diseases, and “many are very promising,” according to Victor Dzau, president of the [US] National Academy of Medicine.

“My greatest hope is that it’s used for good, to uncover new mysteries in biology and to benefit humankind,” said Doudna, who is affiliated with the University of California, Berkeley, and is paid by the Howard Hughes Medical Institute, which also supports The Associated Press’ Health and Science Department.

The prize-winning work has opened the door to some thorny ethical issues: When editing is done after birth, the alterations are confined to that person. Scientists fear CRISPR will be misused to make “designer babies” by altering eggs, embryos or sperm—changes that can be passed on to future generations.

Unusually for phys.org, this October 7, 2020 news item is not a simple press/news release reproduced in its entirety but a good overview of the researchers’ accomplishments and a discussion of some of the issues associated with CRISPR along with the press release at the end.

I have covered some CRISPR issues here including intellectual property (see my March 15, 2017 posting titled, “CRISPR patent decision: Harvard’s and MIT’s Broad Institute victorious—for now”) and designer babies (as exemplified by the situation with Dr. He Jiankui; see my July 28, 2020 post titled, “July 2020 update on Dr. He Jiankui (the CRISPR twins) situation” for more details about it).

An October 7, 2020 article by Michael Grothaus for Fast Company provides a business perspective (Note: A link has been removed),

Needless to say, research by the two scientists awarded the Nobel Prize in Chemistry today has the potential to change the course of humanity. And with that potential comes lots of VC money and companies vying for patents on techniques and therapies derived from Charpentier’s and Doudna’s research.

One such company is Doudna’s Editas Medicine [according to my search, the only company associated with Doudna is Mammoth Biosciences, which she co-founded], while others include Caribou Biosciences, Intellia Therapeutics, and Casebia Therapeutics. Given the world-changing applications—and the amount of revenue such CRISPR therapies could bring in—it’s no wonder that such rivalry is often heated (and in some cases has led to lawsuits over the technology and its patents).

As Doudna explained in her book, A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution, cowritten by Samuel H. Sternberg …, “… —but we could also have woolly mammoths, winged lizards, and unicorns.” And as for that last part, she made clear, “No, I am not kidding.”

Everybody makes mistakes and the reference to Editas Medicine is the only error I spotted. You can find out more about Mammoth Biosciences here and, while Dr. Doudna’s comment, “My greatest hope is that it’s used for good, to uncover new mysteries in biology and to benefit humankind,” is laudable, it would seem she also wishes to profit from the discovery. Mammoth Biosciences is a for-profit company, as can be seen at the end of the Mammoth Biosciences’ October 7, 2020 congratulatory news release,

About Mammoth Biosciences

Mammoth Biosciences is harnessing the diversity of nature to power the next-generation of CRISPR products. Through the discovery and development of novel CRISPR systems, the company is enabling the full potential of its platform to read and write the code of life. By leveraging its internal research and development and exclusive licensing to patents related to Cas12, Cas13, Cas14 and Casɸ, Mammoth Biosciences can provide enhanced diagnostics and genome editing for life science research, healthcare, agriculture, biodefense and more. Based in San Francisco, Mammoth Biosciences is co-founded by CRISPR pioneer Jennifer Doudna and Trevor Martin, Janice Chen, and Lucas Harrington. The firm is backed by top institutional investors [emphasis mine] including Decheng, Mayfield, NFX, and 8VC, and leading individual investors including Brook Byers, Tim Cook, and Jeff Huber.

An October 7, 2020 Nobel Prize press release, which unleashed all this interest in Doudna and Charpentier, notes this,

Prize amount: 10 million Swedish kronor, to be shared equally between the Laureates.

In Canadian money that amount is $1,492,115.03 (as of Oct. 9, 2020 12:40 PDT when I checked a currency converter).
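For anyone checking my arithmetic, the conversion is just multiplication by the day’s exchange rate; the rate below is back-calculated from the figures above (an assumption, not an official published rate).

```python
# SEK -> CAD conversion, using the rate implied by the figures quoted above
# (back-calculated from the converter's output, not an official rate).
prize_sek = 10_000_000
implied_rate = 1_492_115.03 / prize_sek      # CAD per SEK on Oct. 9, 2020
prize_cad = prize_sek * implied_rate
share_per_laureate = prize_cad / 2           # shared equally between the two
print(round(prize_cad, 2))
```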

Ordinarily there’d be a mildly caustic comment from me about business opportunities and medical research but this is a time for congratulations to both Dr. Emmanuelle Charpentier and Dr. Jennifer Doudna.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), I’m following up with a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots), the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but, not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland held from August 27 – 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robotics, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen the designs are as good, and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before, most recently in a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.
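To make the three qualities concrete, here is a toy, rule-based backchannel decider. This is purely illustrative (the function, thresholds, and inputs are invented for this sketch); ERICA’s actual responses are generated by machine learning models trained on a counseling dialogue corpus, not hand-written rules like these.

```python
def backchannel(pause_sec, last_word, pitch_falling):
    """Toy decider covering the three qualities the team names:
    timing (when a response happens), lexical form (what is said),
    and prosody (how the response happens)."""
    # Timing: only respond once the speaker has paused long enough.
    if pause_sec < 0.5:
        return None
    # Lexical form: a partial repeat of the speaker's last word invites
    # elaboration after a completed statement (falling pitch) ...
    if pitch_falling:
        return last_word + "?"   # prosody: rising intonation on the repeat
    # ... otherwise a simple acknowledgement maintains the momentum.
    return "uh-huh"

print(backchannel(0.8, "robot", True))   # robot?
```

Even this crude sketch shows why 'attentive listening' is hard: the system must continuously estimate pause length and pitch from live audio before any of these choices can be made.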

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms ‘robot’ and ‘artificial intelligence’ are often used interchangeably, and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.
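Wong's description of a computer "modifying its algorithm based on the information provided" is, in miniature, what a training loop does. Here is a toy sketch with made-up data (fitting y = 2x by repeatedly nudging a single parameter); real machine learning systems do the same thing with millions of parameters.

```python
# Toy training loop: the model repeatedly adjusts its one parameter (w)
# to shrink its error on the data, with no human intervention.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up examples of y = 2x

w = 0.0                        # initial guess for the parameter
for _ in range(200):           # training iterations ("epochs")
    for x, y in data:
        error = w * x - y      # how wrong the current model is
        w -= 0.01 * error * x  # nudge w in the direction that reduces error

print(round(w, 2))  # converges toward 2.0, the pattern hidden in the data
```

Deep learning replaces the single parameter with layered networks of them, which is what Wong means by algorithms "set up like the structure and function of human brains."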

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

Cardiac patch that’s both organic and engineered: a cyborg heart patch

A March 15, 2016 article by Michael Grothaus for Fast Company breaks the news about the ‘cyborg’ heart patch (Note: A link has been removed),

Researchers at Tel Aviv University’s Department of Biotechnology, Department of Materials Science and Engineering, and Center for Nanoscience and Nanotechnology have created a “cyborg heart patch” that may “single-handedly change the field of cardiac research,” reports EurekAlert. …

The researchers have made an illustration of the cyborg heart patch available,


Caption: A remotely regulated living bionic heart is pictured. The engineered tissue is comprised of living cardiac cells, polymers, and a complex nanoelectronic system. This integrated electronic system provides enhanced capabilities, such as online sensing of heart contraction, and pacing when needed. In addition, the electronics can control the release of growth factors and drugs, for stem cell recruitment and to decrease inflammation after transplantation. Credit: Tel Aviv University

A March 14, 2016 American Friends of Tel Aviv University news release (also on EurekAlert) expands on the theme,

More than 25% of the people on the national US waiting list for a heart will die before receiving one. Despite this discouraging figure, heart transplants are still on the rise. There just hasn’t been an alternative. Until now.

The “cyborg heart patch,” a new engineering innovation from Tel Aviv University, may single-handedly change the field of cardiac research. The bionic heart patch combines organic and engineered parts. In fact, its capabilities surpass those of human tissue alone. The patch contracts and expands like human heart tissue but regulates itself like a machine.

The invention is the brainchild of Prof. Tal Dvir and PhD student Ron Feiner of TAU’s Department of Biotechnology, Department of Materials Science and Engineering, and Center for Nanoscience and Nanotechnology. Their study was published today in the journal Nature Materials.

Science fiction becomes science fact

“With this heart patch, we have integrated electronics and living tissue,” Dr. Dvir said. “It’s very science fiction, but it’s already here, and we expect it to move cardiac research forward in a big way.

“Until now, we could only engineer organic cardiac tissue, with mixed results. Now we have produced viable bionic tissue, which ensures that the heart tissue will function properly.”

Prof. Dvir’s Tissue Engineering and Regenerative Medicine Lab at TAU has been at the forefront of cardiac research for the last five years, harnessing sophisticated nanotechnological tools to develop functional substitutes for tissue permanently damaged by heart attacks and cardiac disease. The new cyborg cardiac patch not only replaces organic tissue but also ensures its sound functioning through remote monitoring.

“We first ensured that the cells would contract in the patch, which explains the need for organic material,” said Dr. Dvir. “But, just as importantly, we needed to verify what was happening in the patch and regulate its function. We also wanted to be able to release drugs from the patch directly onto the heart to improve its integration with the host body.”

For the new bionic patch, Dr. Dvir and his team engineered thick bionic tissue suitable for transplantation. The engineered tissue features electronics that sense tissue function and accordingly provide electrical stimulation. In addition, electroactive polymers are integrated with the electronics. Upon activation, these polymers are able to release medication, such as growth factors or small molecules on demand.

Cardiac therapy in real time

“Imagine that a patient is just sitting at home, not feeling well,” Dr. Dvir said. “His physician will be able to log onto his computer and this patient’s file — in real time. He can view data sent remotely from sensors embedded in the engineered tissue and assess exactly how his patient is doing. He can intervene to properly pace the heart and activate drugs to regenerate tissue from afar.

“The longer-term goal is for the cardiac patch to be able to regulate its own welfare. In other words, if it senses inflammation, it will release an anti-inflammatory drug. If it senses a lack of oxygen, it will release molecules that recruit blood-vessel-forming cells to the heart.”

Dr. Dvir is currently examining how his proof of concept could apply to the brain and spinal cord to treat neurological conditions.

“This is a breakthrough, to be sure,” Dr. Dvir said. “But I would not suggest binging on cheeseburgers or quitting sports just yet. The practical realization of the technology may take some time. Meanwhile, a healthy lifestyle is still the best way to keep your heart healthy.”

It’s exciting news but this is at the proof-of-concept stage and there has been no testing, which (as Dvir seems to be hinting) means it could be several years before clinical trials.

Getting back to the heart of the matter (wordplay intended), here’s a link to and a citation for the paper,

Engineered hybrid cardiac patches with multifunctional electronics for online monitoring and regulation of tissue function by Ron Feiner, Leeya Engel, Sharon Fleischer, Maayan Malki, Idan Gal, Assaf Shapira, Yosi Shacham-Diamand & Tal Dvir. Nature Materials (2016) doi:10.1038/nmat4590 Published online 14 March 2016

This paper is behind a paywall.

Cambridge University researchers tell us why Spiderman can’t exist while Stanford University proves otherwise

A team of zoology researchers at Cambridge University (UK) find themselves in the unenviable position of having their peer-reviewed study used as a source of unintentional humour. I gather zoologists (Cambridge) and engineers (Stanford) don’t have much opportunity to share information.

A Jan. 18, 2016 news item on ScienceDaily announces the Cambridge research findings,

Latest research reveals why geckos are the largest animals able to scale smooth vertical walls — even larger climbers would require unmanageably large sticky footpads. Scientists estimate that a human would need adhesive pads covering 40% of their body surface in order to walk up a wall like Spiderman, and believe their insights have implications for the feasibility of large-scale, gecko-like adhesives.

A Jan. 18, 2016 Cambridge University press release (also on EurekAlert), which originated the news item, describes the research and the thinking that led to the researchers’ conclusions,

Dr David Labonte and his colleagues in the University of Cambridge’s Department of Zoology found that tiny mites use approximately 200 times less of their total body area for adhesive pads than geckos, nature’s largest adhesion-based climbers. And humans? We’d need about 40% of our total body surface, or roughly 80% of our front, to be covered in sticky footpads if we wanted to do a convincing Spiderman impression.

Once an animal is big enough to need a substantial fraction of its body surface to be covered in sticky footpads, the necessary morphological changes would make the evolution of this trait impractical, suggests Labonte.

“If a human, for example, wanted to walk up a wall the way a gecko does, we’d need impractically large sticky feet – our shoes would need to be a European size 145 or a US size 114,” says Walter Federle, senior author also from Cambridge’s Department of Zoology.

The researchers say that these insights into the size limits of sticky footpads could have profound implications for developing large-scale bio-inspired adhesives, which are currently only effective on very small areas.

“As animals increase in size, the amount of body surface area per volume decreases – an ant has a lot of surface area and very little volume, and a blue whale is mostly volume with not much surface area” explains Labonte.

“This poses a problem for larger climbing species because, when they are bigger and heavier, they need more sticking power to be able to adhere to vertical or inverted surfaces, but they have comparatively less body surface available to cover with sticky footpads. This implies that there is a size limit to sticky footpads as an evolutionary solution to climbing – and that turns out to be about the size of a gecko.”
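The geometry behind Labonte's point can be checked with a back-of-the-envelope calculation: under simple isometric scaling, weight grows with the cube of body length while available surface grows only with the square, so the pad fraction needed grows linearly with length. The numbers below are illustrative only, anchored to the study's ~40% figure for a human.

```python
# Back-of-envelope: if weight ~ L^3 and body surface ~ L^2, the fraction
# of surface needed for sticky pads grows linearly with body length L.
# Calibrated (illustratively) so a ~1.7 m human needs ~40% coverage.

HUMAN_LENGTH_M = 1.7
HUMAN_FRACTION = 0.40

def pad_fraction(length_m: float) -> float:
    """Required pad fraction under isometric scaling (illustrative)."""
    return HUMAN_FRACTION * (length_m / HUMAN_LENGTH_M)

for name, length in [("gecko", 0.2), ("cat", 0.5), ("human", 1.7)]:
    print(f"{name}: {pad_fraction(length) * 100:.1f}% of body surface")
```

The crude model already shows why the gecko sits near the practical limit: a gecko-sized climber needs only a few percent of its surface as pads, while a human-sized one needs an unmanageable fraction.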

Larger animals have evolved alternative strategies to help them climb, such as claws and toes to grip with.

The researchers compared the weight and footpad size of 225 climbing animal species including insects, frogs, spiders, lizards and even a mammal.

“We compared animals covering more than seven orders of magnitude in weight, which is roughly the same as comparing a cockroach to the weight of Big Ben, for example,” says Labonte.

These investigations also gave the researchers greater insights into how the size of adhesive footpads is influenced and constrained by the animals’ evolutionary history.

“We were looking at vastly different animals – a spider and a gecko are about as different as a human is to an ant – but if you look at their feet, they have remarkably similar footpads,” says Labonte.

“Adhesive pads of climbing animals are a prime example of convergent evolution – where multiple species have independently, through very different evolutionary histories, arrived at the same solution to a problem. When this happens, it’s a clear sign that it must be a very good solution.”

The researchers believe we can learn from these evolutionary solutions in the development of large-scale manmade adhesives.

“Our study emphasises the importance of scaling for animal adhesion, and scaling is also essential for improving the performance of adhesives over much larger areas. There is a lot of interesting work still to do looking into the strategies that animals have developed in order to maintain the ability to scale smooth walls, which would likely also have very useful applications in the development of large-scale, powerful yet controllable adhesives,” says Labonte.

There is one other possible solution to the problem of how to stick when you’re a large animal, and that’s to make your sticky footpads even stickier.

“We noticed that within closely related species pad size was not increasing fast enough to match body size, probably a result of evolutionary constraints. Yet these animals can still stick to walls,” says Christofer Clemente, a co-author from the University of the Sunshine Coast [Australia].

“Within frogs, we found that they have switched to this second option of making pads stickier rather than bigger. It’s remarkable that we see two different evolutionary solutions to the problem of getting big and sticking to walls,” says Clemente.

“Across all species the problem is solved by evolving relatively bigger pads, but this does not seem possible within closely related species, probably since there is not enough morphological diversity to allow it. Instead, within these closely related groups, pads get stickier. This is a great example of evolutionary constraint and innovation.”

A researcher at Stanford University (US) took strong exception to the Cambridge team’s conclusions, from a Jan. 28, 2016 article by Michael Grothaus for Fast Company (Note: A link has been removed),

It seems the dreams of the web-slinger’s fans were crushed forever—that is until a rival university swooped in and saved the day. A team of engineers working with mechanical engineering graduate student Elliot Hawkes at Stanford University have announced [in 2014] that they’ve invented a device called “gecko gloves” that proves the Cambridge researchers wrong.

Hawkes has created a video outlining the nature of his dispute with Cambridge University and with US TV talk show host Stephen Colbert, who featured the Cambridge University research in one of his monologues,

To be fair to Hawkes, he does prove his point. A Nov. 21, 2014 Stanford University report by Bjorn Carey describes Hawkes’s ingenious ‘sticky pads’,

Each handheld gecko pad is covered with 24 adhesive tiles, and each of these is covered with sawtooth-shape polymer structures each 100 micrometers long (about the width of a human hair).

The pads are connected to special degressive springs, which become less stiff the further they are stretched. This characteristic means that when the springs are pulled upon, they apply an identical force to each adhesive tile and cause the sawtooth-like structures to flatten.

“When the pad first touches the surface, only the tips touch, so it’s not sticky,” said co-author Eric Eason, a graduate student in applied physics. “But when the load is applied, and the wedges turn over and come into contact with the surface, that creates the adhesion force.”

As with actual geckos, the adhesives can be “turned” on and off. Simply release the load tension, and the pad loses its stickiness. “It can attach and detach with very little wasted energy,” Eason said.

The ability of the device to scale up controllable adhesion to support large loads makes it attractive for several applications beyond human climbing, said Mark Cutkosky, the Fletcher Jones Chair in the School of Engineering and senior author on the paper.

“Some of the applications we’re thinking of involve manufacturing robots that lift large glass panels or liquid-crystal displays,” Cutkosky said. “We’re also working on a project with NASA’s Jet Propulsion Laboratory to apply these to the robotic arms of spacecraft that could gently latch on to orbital space debris, such as fuel tanks and solar panels, and move it to an orbital graveyard or pitch it toward Earth to burn up.”

Previous work on synthetic and gecko adhesives showed that adhesive strength decreased as the size increased. In contrast, the engineers have shown that the special springs in their device make it possible to maintain the same adhesive strength at all sizes from a square millimeter to the size of a human hand.

The current version of the device can support about 200 pounds, Hawkes said, but, theoretically, increasing its size by 10 times would allow it to carry almost 2,000 pounds.
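The linear scaling Hawkes describes follows directly from the previous paragraph's claim that the springs keep adhesive strength (force per unit area) constant at all sizes: ten times the pad area supports ten times the load. A quick sanity check (the unit pad area is arbitrary):

```python
# If adhesive strength (force per unit area) is held constant across
# sizes -- the reported property of the Stanford degressive springs --
# then the supported load scales linearly with pad area.
load_lbs = 200.0      # reported capacity of the current device
area_units = 1.0      # current pad area (arbitrary units)

strength = load_lbs / area_units  # force per unit area, held constant
bigger_area = 10 * area_units     # "increasing its size by 10 times"

print(strength * bigger_area)  # 2000.0 lbs, matching Hawkes's estimate
```

Contrast this with the earlier synthetic adhesives mentioned below, whose strength dropped as area grew, so their load scaled sub-linearly.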

Here’s a link to and a citation for the Stanford paper,

Human climbing with efficiently scaled gecko-inspired dry adhesives by Elliot W. Hawkes, Eric V. Eason, David L. Christensen, Mark R. Cutkosky. Journal of the Royal Society Interface DOI: 10.1098/rsif.2014.0675 Published 19 November 2014

This paper is open access.

To be fair to the Cambridge researchers, it’s stretching it a bit to say that Hawkes’s gecko gloves allow someone to be like Spiderman. That’s a very careful, slow climb achieved in a relatively short period of time. Can the human body remain suspended that way for more than a few minutes? How big would your sticky pads have to be if you wanted the same wall-climbing ease of movement and staying power as either a gecko or Spiderman?

Here’s a link to and a citation for the Cambridge paper,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J. Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J. Crosby, Duncan J. Irschick, and Walter Federle. PNAS doi: 10.1073/pnas.1519459113

This paper is behind a paywall but there is an open access preprint version, which may differ from the PNAS version, available,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J Crosby, Duncan J Irschick, Walter Federle. bioRxiv
doi: http://dx.doi.org/10.1101/033845

I hope that if the Cambridge researchers respond, they will be witty rather than huffy. Finally, there’s this gecko image (which I love) from the Cambridge researchers,

Caption: This image shows a gecko and ant. Credit: Image courtesy of A Hackmann and D Labonte