For applications as varied as Bayesian neural networks, determinantal point processes, elliptical graphical models, and kernel learning for Gaussian processes (GPs), one must compute a log determinant of an n by n positive definite matrix, and its derivatives---leading to prohibitive O(n^3) computations. We propose novel O(n) approaches to estimating these quantities from only fast matrix vector multiplications (MVMs). These stochastic approximations are based on Chebyshev, Lanczos, and surrogate models, and converge quickly even for kernel matrices that have challenging spectra. We leverage these approximations to develop a scalable Gaussian process approach to kernel learning. We find that Lanczos is generally superior to Chebyshev for kernel learning, and that a surrogate approach can be highly efficient and accurate with popular kernels.
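The core idea described in the abstract, estimating log det(A) from only matrix-vector multiplications via stochastic Lanczos quadrature, can be sketched as follows. This is a minimal NumPy illustration under hypothetical names (`lanczos_logdet`, `mvm`), not the authors' implementation, and it omits the derivative estimates and surrogate models the paper also develops:

```python
import numpy as np

def lanczos_logdet(mvm, n, num_probes=50, num_steps=30, seed=0):
    """Estimate log det(A) of an SPD n x n matrix using only MVMs.

    Stochastic Lanczos quadrature: average z^T log(A) z over Rademacher
    probe vectors z, evaluating each quadratic form with a Lanczos
    tridiagonalization of A started from the (normalized) probe.
    """
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe, ||z||^2 = n
        Q = np.zeros((n, num_steps))
        Q[:, 0] = z / np.sqrt(n)              # unit-norm starting vector
        alpha, beta = [], []
        for j in range(num_steps):
            w = mvm(Q[:, j])                  # the only access to A
            a = Q[:, j] @ w
            alpha.append(a)
            w = w - a * Q[:, j]
            if j > 0:
                w = w - beta[-1] * Q[:, j - 1]
            # Full reorthogonalization for numerical stability
            w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
            b = np.linalg.norm(w)
            if j + 1 == num_steps or b < 1e-10:  # Krylov space exhausted
                break
            beta.append(b)
            Q[:, j + 1] = w / b
        # Gauss quadrature rule from the tridiagonal Lanczos matrix T
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        evals, evecs = np.linalg.eigh(T)
        weights = evecs[0, :] ** 2            # quadrature weights
        estimates.append(n * np.sum(weights * np.log(evals)))
    return float(np.mean(estimates))

# Demo on a well-conditioned SPD matrix where the exact answer is available
rng = np.random.default_rng(1)
B = rng.standard_normal((100, 100))
A = B @ B.T / 100 + np.eye(100)
est = lanczos_logdet(lambda v: A @ v, n=100)
exact = np.linalg.slogdet(A)[1]
```

Each probe costs `num_steps` MVMs, so for structured kernel matrices with fast MVMs the overall cost is linear in n, in contrast to the O(n^3) Cholesky route.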
Author Information
Kun Dong (Cornell University)
David Eriksson (Cornell University)
Hannes Nickisch (Philips Research)
David Bindel (Cornell University)
Andrew Wilson (Cornell University)
More from the Same Authors
- 2023 Poster: Variational Gaussian Processes with Decoupled Conditionals
  Xinran Zhu · Kaiwen Wu · Natalie Maus · Jacob Gardner · David Bindel
- 2021 Poster: Scaling Gaussian Processes with Derivative Information Using Variational Inference
  Misha Padidar · Xinran Zhu · Leo Huang · Jacob Gardner · David Bindel
- 2019 Workshop: Learning with Rich Experience: Integration of Learning Paradigms
  Zhiting Hu · Andrew Wilson · Chelsea Finn · Lisa Lee · Taylor Berg-Kirkpatrick · Ruslan Salakhutdinov · Eric Xing
- 2018 Workshop: Bayesian Deep Learning
  Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew Wilson · Zoubin Ghahramani · Kevin Murphy · Max Welling
- 2018 Poster: Scaling Gaussian Process Regression with Derivatives
  David Eriksson · Kun Dong · Eric Lee · David Bindel · Andrew Wilson
- 2018 Poster: GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration
  Jacob Gardner · Geoff Pleiss · Kilian Weinberger · David Bindel · Andrew Wilson
- 2018 Spotlight: GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration
  Jacob Gardner · Geoff Pleiss · Kilian Weinberger · David Bindel · Andrew Wilson
- 2017 Workshop: Bayesian Deep Learning
  Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew Wilson · Diederik Kingma · Zoubin Ghahramani · Kevin Murphy · Max Welling
- 2017 Poster: Bayesian GAN
  Yunus Saatci · Andrew Wilson
- 2017 Spotlight: Bayesian GANs
  Yunus Saatci · Andrew Wilson
- 2017 Poster: Bayesian Optimization with Gradients
  Jian Wu · Matthias Poloczek · Andrew Wilson · Peter Frazier
- 2017 Oral: Bayesian Optimization with Gradients
  Jian Wu · Matthias Poloczek · Andrew Wilson · Peter Frazier
- 2017 Poster: Scalable Levy Process Priors for Spectral Kernel Learning
  Phillip Jang · Andrew Loeb · Matthew Davidow · Andrew Wilson
- 2015 Poster: Robust Spectral Inference for Joint Stochastic Matrix Factorization
  Moontae Lee · David Bindel · David Mimno
- 2011 Poster: Additive Gaussian Processes
  David Duvenaud · Hannes Nickisch · Carl Edward Rasmussen
- 2008 Poster: Bayesian Experimental Design of Magnetic Resonance Imaging Sequences
  Matthias Seeger · Hannes Nickisch · Rolf Pohmann · Bernhard Schölkopf
- 2008 Spotlight: Bayesian Experimental Design of Magnetic Resonance Imaging Sequences
  Matthias Seeger · Hannes Nickisch · Rolf Pohmann · Bernhard Schölkopf