Tag Archives: Jacob Biamonte

Machine learning software and quantum computers that think

A Sept. 14, 2017 news item on phys.org sets the stage for quantum machine learning by explaining a few basics first,

Language acquisition in young children is apparently connected with their ability to detect patterns. In their learning process, they search for patterns in the data set that help them identify and optimize grammar structures in order to properly acquire the language. Likewise, online translators use machine learning techniques to optimize their translation engines and produce well-rounded, understandable outcomes. Even though many translations did not make much sense at all at the beginning, in recent years we have seen major improvements thanks to machine learning.

Machine learning techniques use mathematical algorithms and tools to search for patterns in data. These techniques have become powerful tools for many different applications, ranging from biomedical uses such as cancer detection, genetics and genomics, autism monitoring and diagnosis, and even plastic surgery, to applied physics, for studying the nature of materials, matter, or even complex quantum systems.

Capable of adapting and changing when exposed to a new set of data, machine learning can identify patterns, often outperforming humans in accuracy. Although machine learning is a powerful tool, certain application domains remain out of reach due to complexity or other aspects that rule out the use of the predictions that learning algorithms provide.

Thus, in recent years, quantum machine learning has become a matter of interest because of its vast potential as a possible solution to these otherwise unresolvable challenges, and quantum computers appear to be the right tool for the job.
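To make the "searching for patterns in data" idea above concrete, here is a minimal sketch of one of the simplest machine learning methods, a nearest-centroid classifier. This is a generic illustration, not a method from the paper under discussion; the data points and labels are made up.

```python
import numpy as np

# Toy training data: four 2-D points with made-up class labels.
train_x = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]])
train_y = np.array([0, 0, 1, 1])

# "Learning" here means summarizing the pattern of each class
# by the mean (centroid) of its examples.
centroids = np.array([train_x[train_y == c].mean(axis=0) for c in (0, 1)])

def predict(point):
    """Assign a new point to the class with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - point, axis=1)))

print(predict([0.0, 0.0]), predict([1.0, 1.0]))  # -> 0 1
```

The same adapt-to-new-data behaviour the press release describes shows up even here: adding more labelled points and recomputing the centroids changes the learned pattern.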

A Sept. 14, 2017 Institute of Photonic Sciences ([Catalan] Institut de Ciències Fotòniques; ICFO) press release, which originated the news item, goes on to detail a recently published overview of the state of quantum machine learning,

In a recent study, published in Nature, an international team of researchers made up of Jacob Biamonte from Skoltech/IQC, Peter Wittek from ICFO, Nicola Pancotti from MPQ, Patrick Rebentrost from MIT, Nathan Wiebe from Microsoft Research, and Seth Lloyd from MIT have reviewed the current status of classical machine learning and quantum machine learning. In their review, they thoroughly address the different possible combinations: the conventional method of using classical machine learning to analyse classical data, using quantum machine learning to analyse both classical and quantum data, and finally, using classical machine learning to analyse quantum data.

Firstly, they set out to give an in-depth view of the status of current supervised and unsupervised learning protocols in classical machine learning by stating all applied methods. They introduce quantum machine learning and give an extensive account of how this technique could be used to analyse both classical and quantum data, emphasizing that quantum machines could accelerate processing timescales thanks to the use of quantum annealers and universal quantum computers. Quantum annealing technology scales better, but has more limited use cases. For instance, the latest iteration of D-Wave’s [emphasis mine] superconducting chip integrates two thousand qubits, and it is used for solving certain hard optimization problems and for efficient sampling. On the other hand, universal (also called gate-based) quantum computers are harder to scale up, but they are able to perform arbitrary unitary operations on qubits by sequences of quantum logic gates. This resembles how digital computers can perform arbitrary logical operations on classical bits.

However, they address the fact that controlling a quantum system is very complex and analyzing classical data with quantum resources is not as straightforward as one may think, mainly due to the challenge of building quantum interface devices that allow classical information to be encoded into a quantum mechanical form. Difficulties such as the “input” and “output” problems appear to be the major technical challenges that need to be overcome.
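A tiny numerical sketch of the “input” problem described above: one common proposal (amplitude encoding) stores a classical vector in the amplitudes of a quantum state. This is a simplified illustration of that general idea, not the paper's specific scheme; the data values are made up, and the vector length is assumed to be a power of two (4 entries would correspond to 2 qubits).

```python
import numpy as np

# Classical data to be loaded into a quantum state (made-up values).
data = np.array([3.0, 1.0, 2.0, 2.0])

# A quantum state must have unit norm, so the data is rescaled.
amplitudes = data / np.linalg.norm(data)

# Only squared amplitudes (measurement probabilities) are directly
# observable, so the encoding preserves relative magnitudes but
# reading the data back out is itself nontrivial -- the "output" problem.
print(amplitudes ** 2)
```

Even this simple rescaling hints at why the interface is hard: preparing such a state on real hardware generally requires a circuit whose cost can erase the hoped-for quantum speedup.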

The ultimate goal is to find the most optimized method able to read, comprehend, and obtain the best outcomes from a data set, be it classical or quantum. Quantum machine learning is definitely aimed at revolutionizing the field of computer science, not only because it will be able to control quantum computers and speed up information processing far beyond current classical rates, but also because it is capable of carrying out innovative functions, such as quantum deep learning, that could not only recognize counter-intuitive patterns in data, invisible to both classical machine learning and the human eye, but also reproduce them.

As Peter Wittek [emphasis mine] finally states, “Writing this paper was quite a challenge: we had a committee of six co-authors with different ideas about what the field is, where it is now, and where it is going. We rewrote the paper from scratch three times. The final version could not have been completed without the dedication of our editor, to whom we are indebted.”

It was a bit of a surprise to see local (Vancouver, Canada) company D-Wave Systems mentioned but I notice that one of the paper’s authors (Peter Wittek) appears in a May 22, 2017 D-Wave news release announcing a new partnership to foster quantum machine learning,

Today [May 22, 2017] D-Wave Systems Inc., the leader in quantum computing systems and software, announced a new initiative with the Creative Destruction Lab (CDL) at the University of Toronto’s Rotman School of Management. D-Wave will work with CDL, as a CDL Partner, to create a new track to foster startups focused on quantum machine learning. The new track will complement CDL’s successful existing track in machine learning. Applicants selected for the intensive one-year program will go through an introductory boot camp led by Dr. Peter Wittek [emphasis mine], author of Quantum Machine Learning: What Quantum Computing means to Data Mining, with instruction and technical support from D-Wave experts, access to a D-Wave 2000Q™ quantum computer, and the opportunity to use a D-Wave sampling service to enable machine learning computations and applications. D-Wave staff will be a part of the committee selecting up to 40 individuals for the program, which begins in September 2017.

For anyone interested, here’s a link to and a citation for the paper,

Quantum machine learning by Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, & Seth Lloyd. Nature 549, 195–202 (14 September 2017) doi:10.1038/nature23474 Published online 13 September 2017.

This paper is behind a paywall.

3D picture language for mathematics

There’s a new, 3D picture language for mathematics called ‘quon’ according to a March 3, 2017 news item on phys.org,

Galileo called mathematics the “language with which God wrote the universe.” He described a picture-language, and now that language has a new dimension.

The Harvard trio of Arthur Jaffe, the Landon T. Clay Professor of Mathematics and Theoretical Science, postdoctoral fellow Zhengwei Liu, and researcher Alex Wozniakowski has developed a 3-D picture-language for mathematics with potential as a tool across a range of topics, from pure math to physics.

Though not the first pictorial language of mathematics, the new one, called quon, holds promise for being able to transmit not only complex concepts, but also vast amounts of detail in relatively simple images. …

A March 2, 2017 Harvard University news release by Peter Reuell, which originated the news item, provides more context for the research,

“It’s a big deal,” said Jacob Biamonte of the Quantum Complexity Science Initiative after reading the research. “The paper will set a new foundation for a vast topic.”

“This paper is the result of work we’ve been doing for the past year and a half, and we regard this as the start of something new and exciting,” Jaffe said. “It seems to be the tip of an iceberg. We invented our language to solve a problem in quantum information, but we have already found that this language led us to the discovery of new mathematical results in other areas of mathematics. We expect that it will also have interesting applications in physics.”

When it comes to the “language” of mathematics, humans start with the basics — by learning their numbers. As we get older, however, things become more complex.

“We learn to use algebra, and we use letters to represent variables or other values that might be altered,” Liu said. “Now, when we look at research work, we see fewer numbers and more letters and formulas. One of our aims is to replace ‘symbol proof’ by ‘picture proof.’”

The new language relies on images to convey the same information that is found in traditional algebraic equations — and in some cases, even more.

“An image can contain information that is very hard to describe algebraically,” Liu said. “It is very easy to transmit meaning through an image, and easy for people to understand what they see in an image, so we visualize these concepts and instead of words or letters can communicate via pictures.”

“So this pictorial language for mathematics can give you insights and a way of thinking that you don’t see in the usual, algebraic way of approaching mathematics,” Jaffe said. “For centuries there has been a great deal of interaction between mathematics and physics because people were thinking about the same things, but from different points of view. When we put the two subjects together, we found many new insights, and this new language can take that into another dimension.”

In their most recent work, the researchers moved their language into a more literal realm, creating 3-D images that, when manipulated, can trigger mathematical insights.

“Where before we had been working in two dimensions, we now see that it’s valuable to have a language that’s Lego-like, and in three dimensions,” Jaffe said. “By pushing these pictures around, or working with them like an object you can deform, the images can have different mathematical meanings, and in that way we can create equations.”

Among their pictorial feats, Jaffe said, are the complex equations used to describe quantum teleportation. The researchers have pictures for the Pauli matrices, which are fundamental components of quantum information protocols. This shows that the standard protocols are topological, and also leads to the discovery of new protocols.
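For readers unfamiliar with the Pauli matrices mentioned above, here they are in their standard matrix form, with two of their defining algebraic properties checked numerically. This is textbook background, not anything specific to the quon paper.

```python
import numpy as np

# The three Pauli matrices, fundamental building blocks of
# quantum information protocols such as teleportation.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Each Pauli matrix squares to the identity...
assert all(np.allclose(P @ P, I2) for P in (X, Y, Z))

# ...and their commutation relations are cyclic: [X, Y] = 2iZ.
assert np.allclose(X @ Y - Y @ X, 2j * Z)
```

It is relations like these, rather than the matrix entries themselves, that the quon picture-language is designed to capture diagrammatically.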

“It turns out one picture is worth 1,000 symbols,” Jaffe said.

“We could describe this algebraically, and it might require an entire page of equations,” Liu added. “But we can do that in one picture, so it can capture a lot of information.”

Having found a fit with quantum information, the researchers are now exploring how their language might also be useful in a number of other subjects in mathematics and physics.

“We don’t want to make claims at this point,” Jaffe said, “but we believe and are thinking about quite a few other areas where this picture-language could be important.”

Sadly, there are no artistic images illustrating quon but this is from the paper,

An n-quon is represented by n hemispheres. We call the flat disc on the boundary of each hemisphere a boundary disc. Each hemisphere contains a neutral diagram with four boundary points on its boundary disc. The dotted box designates the internal structure that specifies the quon vector. For example, the 3-quon is represented as

Courtesy: PNAS and Harvard University

I gather the term ‘quon’ is meant to suggest quantum particles.

Here’s a link and a citation for the paper,

Quon 3D language for quantum information by Zhengwei Liu, Alex Wozniakowski, and Arthur M. Jaffe. Proceedings of the National Academy of Sciences (PNAS) March 7, 2017 vol. 114 no. 10. Published online before print February 6, 2017, doi: 10.1073/pnas.1621345114

This paper appears to be open access.