Machine learning has already made a huge impact on financial institutions’ operations, and this impact is about to be amplified by the adoption of quantum computing. Alexei Kondratyev, Managing Director and Head of Data Analytics, Electronic Market Solutions at Standard Chartered Bank, and Quant of the Year, shares why quantum machine learning is changing the game for banks.
The emerging technology of quantum computing promises to revolutionise quantitative finance to the same extent that classical digital computing helped to create it. There are two main reasons for this. First, there is quantum speed-up: the ability of quantum computers, whether digital (based on quantum logic gates) or analogue (based on a quantum annealing process), to solve certain difficult problems in a fraction of the time required by their classical counterparts. Second, quantum computers solve problems differently. Different types of algorithms can be run on quantum computers, which may not only bring greater efficiency but also offer different insight into the problem being solved.
Live from QuantMinds International, watch our interview with Kondratyev:
The current stage of quantum computing development is characterised by the existence of relatively powerful quantum annealers capable of solving complex optimisation problems, and by general-purpose digital quantum computers taking their first steps. From the quantitative finance point of view, it is therefore important to prioritise research on the subset of problems that can be solved on both platform types. The natural candidate is the class of optimisation problems for which we already have well-studied classical benchmarks. Optimisation problems form a large class of hard-to-solve financial problems, and many supervised and reinforcement learning tools used in finance are themselves trained by solving optimisation problems (minimising a cost function, maximising a reward).
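To make this concrete, a small discrete portfolio selection problem can be written in the quadratic unconstrained binary optimisation (QUBO) form that quantum annealers accept. The numbers below are randomly generated toy data, not from any cited study, and exhaustive search stands in for the annealer at this tiny size:

```python
import itertools
import numpy as np

# Toy discrete portfolio selection as a QUBO: choose a binary vector x
# (1 = include asset) minimising x^T Q x, where Q carries negative expected
# returns on the diagonal and a quadratic penalty enforcing a budget of
# exactly k selected assets. All inputs are illustrative assumptions.
rng = np.random.default_rng(0)
n, k, penalty = 6, 3, 2.0
mu = rng.uniform(0.01, 0.10, n)            # expected returns
cov = np.diag(rng.uniform(0.01, 0.05, n))  # simplified diagonal risk

Q = cov - np.diag(mu)
# Budget constraint (sum x_i = k) as a penalty: penalty * (sum x_i - k)^2.
# Expanding and using x_i^2 = x_i gives the diagonal and off-diagonal terms:
for i in range(n):
    Q[i, i] += penalty * (1 - 2 * k)
    for j in range(n):
        if i != j:
            Q[i, j] += penalty

def qubo_energy(x):
    return float(x @ Q @ x)

# Exhaustive search over 2^6 bitstrings stands in for the annealer here.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)),
           key=qubo_energy)
print(best, qubo_energy(best))
```

With the penalty weight dominating the return terms, the minimiser always selects exactly `k` assets; on a real annealer the same matrix `Q` would be mapped onto the hardware graph instead of being enumerated.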
Machine learning is likely to be one of the first areas where quantum computing can demonstrate tangible benefits. Although sufficiently powerful gate model quantum computers may not be available for a decade, quantum annealers have already been successfully used for a number of machine learning tasks.
Quantum annealers are special-purpose machines inspired by the adiabatic quantum computing paradigm. Manufactured by D-Wave Systems, they first appeared on the market in 2011. While limited in programmability compared with other experimental devices under testing by other companies, they are still, at the time of writing, the only available quantum devices with enough quantum memory (qubits) to be applied to non-trivial problems. For this reason they are the subject of extensive empirical investigation by several groups around the world, not only for scientific and research purposes, but also for performance evaluation on structured real-world optimisation challenges [1].
Examples of discriminative machine learning problems solved using quantum annealers include building a strong classifier from several weak ones, for use cases as diverse as Higgs boson detection [2] and DNA experiments [3]. The resulting strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. The quantum annealing-trained classifier performs comparably to the state-of-the-art machine learning methods currently used in particle physics and computational biology. In contrast to those methods, however, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning, and they demonstrate some advantage over traditional machine learning methods on small training datasets.
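The idea behind building a strong classifier on an annealer can be sketched as follows: pick a binary weight vector selecting which weak classifiers to include, so that their averaged vote best matches the labels, plus a sparsity penalty. The objective is quadratic in the binary weights, so it maps directly onto a QUBO. The data and weak learners below are synthetic stand-ins, not the Higgs or DNA datasets:

```python
import itertools
import numpy as np

# Synthetic labels and weak classifiers (each agrees with the label on
# roughly 70% of samples). All sizes and rates are illustrative assumptions.
rng = np.random.default_rng(1)
n_samples, n_weak, lam = 200, 5, 0.01
y = rng.choice([-1, 1], size=n_samples)
H = np.array([np.where(rng.random(n_samples) < 0.7, y, -y)
              for _ in range(n_weak)])

def qubo_loss(w):
    # Quadratic in the binary weights w, hence expressible as a QUBO:
    # squared error of the averaged vote plus an L0-style sparsity penalty.
    combined = (w @ H) / n_weak
    return float(np.sum((combined - y) ** 2) + lam * n_samples * w.sum())

# Exhaustive search over the 2^5 subsets stands in for the annealer.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n_weak)),
           key=qubo_loss)
accuracy = np.mean(np.sign(best @ H) == y)
print(best, accuracy)
```

The selected ensemble is a plain majority vote over interpretable weak classifiers, which is what gives this family of models its transparency relative to black-box alternatives.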
Another application of quantum annealing is in generative learning. In deep learning, a well-known approach for training a deep neural network starts with training a generative Deep Belief Network model, typically using Contrastive Divergence (CD), and then fine-tunes the weights using backpropagation or other discriminative techniques. However, the generative training can be time consuming due to the slow mixing of Gibbs sampling. An alternative approach estimates the model expectations of Restricted Boltzmann Machines using samples from a D-Wave quantum annealing machine [4]. This quantum sampling-based training can achieve comparable or better accuracy with significantly fewer iterations of generative training than conventional CD-based training.
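For orientation, here is a minimal CD-1 update for a small Restricted Boltzmann Machine: the classical step whose negative-phase model expectation the quantum approach replaces with annealer samples. Sizes and data are illustrative assumptions, and biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)
n_visible, n_hidden, lr = 6, 4, 0.1
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    # Positive phase: hidden activation probabilities given the data vector.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(n_hidden) < ph0).astype(float)
    # Negative phase: one Gibbs step to reconstruct visibles, then hiddens.
    pv1 = sigmoid(h0 @ W.T)
    v1 = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    # CD-1 approximates the model expectation <v h> with this single
    # reconstruction; slow Gibbs mixing is exactly what the quantum
    # sampling approach aims to sidestep.
    return lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

v = rng.integers(0, 2, size=n_visible).astype(float)
W += cd1_update(v)
```

In the quantum variant, the negative-phase term is estimated from hardware samples of the Boltzmann distribution rather than from a truncated Gibbs chain, which is where the reduction in generative training iterations comes from.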
Finally, we need to keep in mind the potential impact of quantum speed-up on the relative performance of classically and quantum-trained machine learning systems. The generally accepted measure of algorithm performance is Time-to-Solution (TTS): the amount of time needed to find a solution at the desired confidence level (typically the 99th percentile). TTS may or may not include various overhead costs, depending on the objectives of the comparison. This topic has been the subject of extensive research in recent years, and interesting results have been obtained for discrete portfolio optimisation problems solved using D-Wave quantum annealers and classical benchmarks such as a Genetic Algorithm. A two-orders-of-magnitude speed-up was established in [1] for the D-Wave Quantum Annealer 2000Q™ relative to an Intel® Xeon® CPU E5-1620 v4 processor on the fully connected graph problem, ignoring all computational overheads for both the classical benchmark and the quantum annealer. However, once the additional overhead times required to complete a full run on the D-Wave machine are included, the advantage disappears, and the TTS results are on par with the classical benchmark.
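The TTS measure has a standard closed form: with per-run success probability p and time per run t, the number of repetitions needed to reach 99% confidence of seeing the solution at least once is ceil(log(1 − 0.99) / log(1 − p)). A small sketch, with illustrative timings rather than the published benchmark numbers:

```python
import math

def tts(t_run_us, p_success, confidence=0.99):
    # Expected wall-clock time (in microseconds here) to observe the best
    # solution at least once with the desired confidence, given the
    # per-run success probability. Overheads are deliberately excluded,
    # mirroring the "raw anneal time" comparisons discussed in the text.
    if p_success >= confidence:
        return t_run_us  # a single run already suffices
    repeats = math.ceil(math.log(1 - confidence) / math.log(1 - p_success))
    return t_run_us * repeats

# e.g. a 20-microsecond anneal with a 5% per-run success rate
print(tts(20.0, 0.05))
```

Whether queueing, programming, and readout overheads are added to `t_run_us` is precisely the modelling choice that makes or breaks the headline speed-up figures.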
We should also note a very promising development that could lead to orders-of-magnitude improvement in quantum annealing TTS: the transition from the current D-Wave Chimera architecture to the new Pegasus architecture. According to forecasts, the next chip will be able to embed almost 400 logical variables, reducing embedding overhead by at least a factor of 3. Since the embedding overhead accounts for a large part of the performance cost, we conservatively expect to gain at least an order of magnitude from this improvement. As the number of physical qubits increases, we could also leverage error-suppression encodings nested within the embedding that, while reducing the total number of logical variables, provably improve the probability of success.
Learn more during Kondratyev's session later this afternoon at Quant Tech Summit, part of QuantMinds International.
1. Davide Venturelli and Alexei Kondratyev. “Reverse Quantum Annealing Approach to Portfolio Optimization Problems.”
2. Alex Mott, Joshua Job, Jean-Roch Vlimant, Daniel Lidar, and Maria Spiropulu. “Solving a Higgs optimization problem with quantum annealing for machine learning.” Nature, 550 (7676), 2017.
3. Richard Y. Li, Rosa Di Felice, Remo Rohs, and Daniel A. Lidar. “Quantum annealing versus classical machine learning applied to a simplified computational biology problem.” Nature Partner Journals – Quantum Information.
4. Steven H. Adachi and Maxwell P. Henderson. “Application of Quantum Annealing to Training of Deep Neural Networks.” arXiv:1510.06356, 2015.