Abstract
Gaussian processes are rich distributions over functions, with generalization properties determined by a kernel function. When used for long-range extrapolation, predictions are particularly sensitive to the choice of kernel parameters, so it is critical to account for kernel uncertainty in the predictive distribution. We propose a distribution over kernels formed by modelling a spectral mixture density with a Lévy process. The resulting distribution has support for all stationary covariances---including the popular RBF, periodic, and Matérn kernels---combined with inductive biases that enable automatic, data-efficient learning, long-range extrapolation, and state-of-the-art predictive performance. The proposed model also provides an approach to spectral regularization: the Lévy process introduces a sparsity-inducing prior over mixture components, allowing automatic selection of the model order and pruning of extraneous components. We exploit the algebraic structure of the proposed process for O(n) training and O(1) predictions. On several benchmarks we perform extrapolations with reasonable uncertainty estimates, and we show that the proposed model can recover flexible ground-truth covariances and is robust to errors in initialization.
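For context, the spectral mixture density referenced above corresponds to the spectral mixture kernel of Wilson and Adams (2013), over whose components the Lévy process places a sparsity-inducing prior. Below is a minimal NumPy sketch of that kernel for one-dimensional inputs; it is not the authors' implementation, and the function name and hyperparameter values are illustrative assumptions.

    import numpy as np

    def spectral_mixture_kernel(tau, weights, means, scales):
        """Stationary spectral mixture kernel k(tau): a Gaussian mixture over
        the spectral density, mapped to a covariance via Bochner's theorem.
        The paper's Levy-process prior acts on such mixture components,
        shrinking extraneous weights toward zero.
        """
        tau = np.asarray(tau, dtype=float)[..., None]        # broadcast over components
        env = np.exp(-2.0 * np.pi**2 * tau**2 * scales**2)   # Gaussian (RBF-like) envelope
        osc = np.cos(2.0 * np.pi * tau * means)              # oscillation at each spectral mean
        return np.sum(weights * env * osc, axis=-1)

    # Illustrative (assumed) hyperparameters for a three-component mixture:
    w  = np.array([1.0, 0.5, 0.1])   # mixture weights; a sparsity prior prunes small ones
    mu = np.array([0.0, 0.2, 1.5])   # spectral means (frequencies)
    s  = np.array([0.05, 0.1, 0.3])  # spectral bandwidths
    print(spectral_mixture_kernel(np.linspace(0.0, 10.0, 5), w, mu, s))

With enough components, mixtures of this form can approximate any stationary covariance, which is why a prior over the mixture yields the broad support claimed in the abstract.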
Author Information
Phillip Jang (Cornell University)
Andrew Loeb (Cornell University)
Matthew Davidow (Cornell University)
Andrew Wilson (Cornell University)
More from the Same Authors
- 2019 Workshop: Learning with Rich Experience: Integration of Learning Paradigms
  Zhiting Hu · Andrew Wilson · Chelsea Finn · Lisa Lee · Taylor Berg-Kirkpatrick · Ruslan Salakhutdinov · Eric Xing
- 2018 Workshop: Bayesian Deep Learning
  Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew Wilson · Zoubin Ghahramani · Kevin Murphy · Max Welling
- 2017 Workshop: Bayesian Deep Learning
  Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew Wilson · Diederik Kingma · Zoubin Ghahramani · Kevin Murphy · Max Welling
- 2017 Poster: Bayesian GAN
  Yunus Saatci · Andrew Wilson
- 2017 Spotlight: Bayesian GAN
  Yunus Saatci · Andrew Wilson
- 2017 Poster: Bayesian Optimization with Gradients
  Jian Wu · Matthias Poloczek · Andrew Wilson · Peter Frazier
- 2017 Poster: Scalable Log Determinants for Gaussian Process Kernel Learning
  Kun Dong · David Eriksson · Hannes Nickisch · David Bindel · Andrew Wilson
- 2017 Oral: Bayesian Optimization with Gradients
  Jian Wu · Matthias Poloczek · Andrew Wilson · Peter Frazier