Quantum machine learning achieves advantage in IBM research

19 Jul 2021


In a new paper by IBM, quantum machine learning was able to discern patterns where classical computers missed the signal in the noise.

Quantum computing is a field full of promise but has yet to prove many of its supposed advantages. IBM is confident that quantum advantage will come to fruition, but is still working to deliver the proof.

IBM's machines around the world have demonstrated quantum capabilities in other domains, but a machine learning advantage is still in the works. However, a new research paper from IBM Quantum has tackled a central question in quantum machine learning: which quantum algorithms are currently capable of delivering a provable quantum advantage over classical machine learning algorithms?

Proposals in quantum machine learning are often driven by the challenge to find algorithms that can be tested in the near-term with only conventional access to data.

One such class of algorithms is the proposal for quantum-enhanced feature spaces, also known as quantum kernel methods. In these set-ups, the quantum computer steps in for just one part of the algorithm.

These were the focus of IBM’s latest research.

An area of potential use for these kernel methods is classification, one of the most fundamental problems in machine learning.

Classification begins by training an algorithm on data, called a training set, in which each data point carries one of two labels. Following the training phase is a testing phase, where the algorithm must classify a new data point it has not seen before.

A standard example is giving a computer pictures of dogs and cats, and from this dataset it classifies all future images as a dog or a cat.
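The train/test protocol described above can be sketched with a toy binary classifier. The data, labels and nearest-centroid rule below are purely illustrative and are not taken from IBM's paper:

```python
# Toy two-phase classifier: train on labelled points, then classify
# an unseen point. All names and data here are illustrative.

def train(points, labels):
    """Training phase: compute the mean (centroid) of each class."""
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids

def classify(centroids, point):
    """Testing phase: assign an unseen point to the nearest centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], point))

# Two labelled clusters standing in for "cat" and "dog" features.
training_set = [(0.0, 0.1), (0.2, 0.0), (1.0, 0.9), (0.9, 1.1)]
training_labels = ["cat", "cat", "dog", "dog"]

model = train(training_set, training_labels)
prediction = classify(model, (0.1, 0.2))  # a new, unseen point
```

A real pipeline would use a richer model, but the two-phase structure — fit on labelled data, then predict labels for unseen data — is the same.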

Ultimately, the goal of an efficient machine learning algorithm for classification should be to generate an accurate label in an amount of time that scales polynomially with the size of the input.

In this case, the researchers started with a conventional machine learning model to learn the kernel function, which finds the relevant features in the data to use for classification.
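To make the role of the kernel concrete, here is a hedged classical sketch: a kernel function scores the similarity of two data points, implicitly comparing them in a richer feature space. The Gaussian (RBF) kernel below is a standard classical choice; in IBM's set-up a quantum circuit takes over this similarity-estimation step, which is not reproduced here:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2): equals 1 for identical
    points and decays towards 0 as the points move apart."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

data = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0)]
# Gram matrix of pairwise similarities: this is what the learning
# algorithm actually consumes, regardless of how each entry is computed.
gram = [[rbf_kernel(x, y) for y in data] for x in data]
```

The classifier itself never needs the high-dimensional features explicitly; it only needs the Gram matrix, which is why swapping in a quantum kernel estimator changes just one component of the pipeline.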

The quantum advantage comes from the fact that the researchers were able to construct a family of datasets for which only quantum computers can recognise the intrinsic labelling patterns.

The researchers used a problem known to separate classical and quantum computation: computing discrete logarithms in a cyclic group, where all the members of the group can be generated by repeating a single mathematical operation.
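A tiny example illustrates the structure involved. Repeatedly multiplying by a generator g modulo a prime p produces every element of the group; going forward (computing g^x) is fast, while recovering x from the result — the discrete logarithm — has no known efficient classical algorithm in general. The numbers here are toy-sized for demonstration; the hardness only bites at cryptographic scale:

```python
# A cyclic group: the powers of g = 3 modulo the prime p = 7
# generate every nonzero residue.
p, g = 7, 3

# Forward direction is easy: fast modular exponentiation.
elements = [pow(g, x, p) for x in range(p - 1)]

# Inverse direction (the discrete logarithm): brute force is the
# naive classical approach, and it scales with the group size.
def discrete_log(h, g, p):
    for x in range(p - 1):
        if pow(g, x, p) == h:
            return x
    raise ValueError("no logarithm found")
```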

To classical computers, the dataset looked like meaningless noise, whereas quantum computers could uncover the pattern hidden in the data.

The team demonstrated this by constructing a family of classification problems and showed that no efficient classical algorithm could do better than random guessing when attempting to solve these problems.

They also constructed a quantum feature map. This is a way to view complicated data in a higher-dimensional space to pull out patterns. When used alongside the corresponding kernel function, the researchers were able to predict the labels with high accuracy.
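A classical analogue conveys the idea of a feature map. Points on an inner versus an outer ring cannot be split by any straight line in the plane, but lifting each point into three dimensions by appending its squared radius makes a single flat plane separate them. A quantum feature map plays the same role with a quantum state as the high-dimensional image; none of the paper's actual circuits appear in this sketch:

```python
# Lift 2D points into 3D so that ring-shaped classes become
# linearly separable. Data below is illustrative.

def feature_map(point):
    """Map (x, y) to (x, y, x**2 + y**2)."""
    x, y = point
    return (x, y, x * x + y * y)

inner = [(0.1, 0.0), (0.0, -0.1), (-0.1, 0.1)]   # small radius
outer = [(2.0, 0.0), (0.0, 2.0), (-1.5, 1.5)]    # large radius

# In the lifted space, the flat plane z = 1 separates the rings:
# every inner point has z < 1, every outer point has z > 1.
lifted_inner = [feature_map(p) for p in inner]
lifted_outer = [feature_map(p) for p in outer]
```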

What’s more, they could show that the high accuracy persists in the presence of finite sampling noise from taking measurements, a form of noise that needs to be considered even for fault-tolerant quantum computers.

IBM highlighted that there will still be many real-life problems for which this quantum algorithm does not perform better than conventional classical machine learning algorithms.

To obtain a quantum advantage, the classification problem must fit the cyclic structure described above. This is an important caveat, and IBM's further research will aim to determine how generalisable this structure is.

Nevertheless, the researchers were confident that this demonstrates an end-to-end quantum speed-up for a fault-tolerant implementation of a quantum kernel method under realistic assumptions.

Another potential sticking point could be the hardware limitations of modern quantum computers.

However, with the field rapidly advancing and ever-bigger machines coming online across the globe, solving this limitation might just be a case of when rather than if.

Sam Cox is a journalist at Silicon Republic covering sci-tech news

editorial@siliconrepublic.com