Abstract
It has been hypothesized that quantum computers may lend themselves well to applications in machine learning. In the present work, we analyze function classes defined via quantum kernels. Quantum computers offer the possibility of efficiently computing inner products of exponentially large density operators that are classically hard to compute. However, having an exponentially large feature space renders the problem of generalization hard. Furthermore, being able to evaluate inner products in high-dimensional spaces efficiently does not by itself guarantee a quantum advantage, as even classically tractable kernels can correspond to high- or infinite-dimensional reproducing kernel Hilbert spaces (RKHS). We analyze the spectral properties of quantum kernels and find that an advantage can be expected if their RKHS is low-dimensional and contains functions that are hard to compute classically. If the target function is known to lie in this class, this implies a quantum advantage, as the quantum computer can encode this inductive bias, whereas there is no classically efficient way to constrain the function class in the same way. However, we show that finding suitable quantum kernels is not easy, because the kernel evaluation might require exponentially many measurements. In conclusion, our message is a somewhat sobering one: we conjecture that quantum machine learning models can offer speed-ups only if we manage to encode knowledge about the problem at hand into quantum circuits, while encoding the same bias into a classical model would be hard. These situations may plausibly occur when learning on data generated by a quantum process; however, they appear to be harder to come by for classical datasets.
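To make the objects in the abstract concrete, the sketch below is a minimal illustration and not code from the paper: the single-qubit angle encoding, the toy data, and the helper names feature_state and quantum_kernel are all assumptions made here for exposition. It classically simulates a fidelity-type quantum kernel k(x, x') = Tr[rho(x) rho(x')] and inspects the eigenvalue decay of its Gram matrix, which is the kind of spectral property the abstract refers to.

# Minimal sketch (assumptions: single-qubit angle encoding, scalar toy inputs).
# The "quantum kernel" here is the state fidelity k(x, x') = |<phi(x)|phi(x')>|^2,
# simulated classically; the spectrum of the Gram matrix indicates how many
# directions of the RKHS the sampled data actually excites.
import numpy as np

def feature_state(x):
    """Single-qubit state RY(x)|0> = [cos(x/2), sin(x/2)]."""
    return np.array([np.cos(x / 2.0), np.sin(x / 2.0)])

def quantum_kernel(x, y):
    """Fidelity kernel Tr[rho(x) rho(y)] = |<phi(x)|phi(y)>|^2."""
    return np.abs(feature_state(x) @ feature_state(y)) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=50)  # toy inputs
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])

# Eigenvalues of the normalized Gram matrix, sorted in decreasing order:
# a fast decay means the model effectively behaves like a low-dimensional one.
eigvals = np.sort(np.linalg.eigvalsh(K / len(X)))[::-1]
print(eigvals[:5])

A rapidly decaying spectrum corresponds to an effectively low-dimensional RKHS on the sampled data; the abstract's argument is that an advantage additionally requires the dominant functions in that space to be classically hard to compute.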
Author Information
Jonas Kübler (MPI for Intelligent Systems, Tübingen)
Simon Buchholz (Max-Planck Institute)
Bernhard Schölkopf (MPI for Intelligent Systems, Tübingen)
More from the Same Authors
- 2021 Spotlight: Iterative Teaching by Label Synthesis
  Weiyang Liu · Zhen Liu · Hanchen Wang · Liam Paull · Bernhard Schölkopf · Adrian Weller
- 2021 Spotlight: DiBS: Differentiable Bayesian Structure Learning
  Lars Lorch · Jonas Rothfuss · Bernhard Schölkopf · Andreas Krause
- 2022: A Causal Framework to Quantify Robustness of Mathematical Reasoning with Language Models
  Alessandro Stolfo · Zhijing Jin · Kumar Shridhar · Bernhard Schölkopf · Mrinmaya Sachan
- 2022 Poster: AutoML Two-Sample Test
  Jonas M. Kübler · Vincent Stimper · Simon Buchholz · Krikamol Muandet · Bernhard Schölkopf
- 2022 Poster: Function Classes for Identifiable Nonlinear Independent Component Analysis
  Simon Buchholz · Michel Besserve · Bernhard Schölkopf
- 2021: Boxhead: A Dataset for Learning Hierarchical Representations
  Yukun Chen · Andrea Dittadi · Frederik Träuble · Stefan Bauer · Bernhard Schölkopf
- 2021 Poster: Dynamic Inference with Neural Interpreters
  Nasim Rahaman · Muhammad Waleed Gondal · Shruti Joshi · Peter Gehler · Yoshua Bengio · Francesco Locatello · Bernhard Schölkopf
- 2021 Poster: Causal Influence Detection for Improving Efficiency in Reinforcement Learning
  Maximilian Seitzer · Bernhard Schölkopf · Georg Martius
- 2021 Poster: Independent mechanism analysis, a new concept?
  Luigi Gresele · Julius von Kügelgen · Vincent Stimper · Bernhard Schölkopf · Michel Besserve
- 2021 Poster: Iterative Teaching by Label Synthesis
  Weiyang Liu · Zhen Liu · Hanchen Wang · Liam Paull · Bernhard Schölkopf · Adrian Weller
- 2021 Poster: Backward-Compatible Prediction Updates: A Probabilistic Approach
  Frederik Träuble · Julius von Kügelgen · Matthäus Kleindessner · Francesco Locatello · Bernhard Schölkopf · Peter Gehler
- 2021 Poster: Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style
  Julius von Kügelgen · Yash Sharma · Luigi Gresele · Wieland Brendel · Bernhard Schölkopf · Michel Besserve · Francesco Locatello
- 2021 Poster: DiBS: Differentiable Bayesian Structure Learning
  Lars Lorch · Jonas Rothfuss · Bernhard Schölkopf · Andreas Krause
- 2021 Poster: Regret Bounds for Gaussian-Process Optimization in Large Domains
  Manuel Wuethrich · Bernhard Schölkopf · Andreas Krause
- 2020 Poster: Learning Kernel Tests Without Data Splitting
  Jonas Kübler · Wittawat Jitkrittum · Bernhard Schölkopf · Krikamol Muandet
- 2019: Bernhard Schölkopf
  Bernhard Schölkopf
- 2018: Learning Independent Mechanisms
  Bernhard Schölkopf