Gaussian processes scale prohibitively with the size of the dataset. In response, many approximation methods have been developed, which inevitably introduce approximation error. This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior. Therefore, in practice, GP models are often as much about the approximation method as they are about the data. Here, we develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended. The most common GP approximations map to an instance in this class, such as methods based on the Cholesky factorization, conjugate gradients, and inducing points. For any method in this class, we prove (i) convergence of its posterior mean in the associated RKHS, (ii) decomposability of its combined posterior covariance into mathematical and computational covariances, and (iii) that the combined variance is a tight worst-case bound for the squared error between the method's posterior mean and the latent function. Finally, we empirically demonstrate the consequences of ignoring computational uncertainty and show how implicitly modeling it improves generalization performance on benchmark datasets.
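The covariance decomposition claimed in (ii) can be illustrated numerically. The sketch below is an illustrative construction, not the authors' implementation: after i linear "actions" s_1, ..., s_i (here unit vectors, a partial-Cholesky-style policy), the approximate precision is C_i = S_i (S_i^T K̂ S_i)^{-1} S_i^T with K̂ = K + σ²I, and the combined predictive variance k(x,x) − k_x^T C_i k_x splits exactly into the mathematical variance (uncertainty from finite data) plus a non-negative computational variance (uncertainty from finite computation). All names and the toy data are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Z, ell=1.0):
    """Squared-exponential kernel matrix between row-stacked inputs."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Toy regression data (illustrative only).
n, sigma2, i = 30, 0.1, 10
X = rng.uniform(-3.0, 3.0, size=(n, 1))
y = np.sin(X[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(n)
x_test = np.array([[0.5]])

K_hat = rbf(X, X) + sigma2 * np.eye(n)   # K + sigma^2 I
k_x = rbf(X, x_test)[:, 0]               # cross-covariances to test point
k_xx = rbf(x_test, x_test)[0, 0]         # prior variance at test point

# i actions: the first i coordinate vectors (partial-Cholesky-style policy).
S = np.eye(n)[:, :i]
C_i = S @ np.linalg.solve(S.T @ K_hat @ S, S.T)  # approximate precision

mean_i = k_x @ C_i @ y                           # approximate posterior mean
K_inv_k = np.linalg.solve(K_hat, k_x)
math_var = k_xx - k_x @ K_inv_k                  # exact (mathematical) variance
comp_var = k_x @ (K_inv_k - C_i @ k_x)           # computational variance
combined = k_xx - k_x @ C_i @ k_x                # combined variance

# The combined variance decomposes exactly, and computational
# uncertainty is non-negative (C_i precedes K_hat^{-1} in Loewner order).
assert np.isclose(combined, math_var + comp_var)
assert comp_var >= -1e-12
```

Increasing the number of actions i drives `comp_var` toward zero, recovering the exact posterior; with i = 0 the method returns the prior, so all uncertainty is computational.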
Author Information
Jonathan Wenger (University of Tübingen)
Geoff Pleiss (Columbia University)
Marvin Pförtner (University of Tübingen)
Philipp Hennig (University of Tübingen)
John Cunningham (Columbia University)
More from the Same Authors
- 2022: Late-Phase Second-Order Training
  Lukas Tatzel · Philipp Hennig · Frank Schneider
- 2022: The Best Deep Ensembles Sacrifice Predictive Diversity
  Taiga Abe · Estefany Kelly Buchanan · Geoff Pleiss · John Cunningham
- 2022: Denoising Deep Generative Models
  Gabriel Loaiza-Ganem · Brendan Ross · Luhuan Wu · John Cunningham · Jesse Cresswell · Anthony Caterini
- 2022 Workshop: Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems
  Alexander Terenin · Elizaveta Semenova · Geoff Pleiss · Zi Wang
- 2022 Workshop: Has it Trained Yet? A Workshop for Algorithmic Efficiency in Practical Neural Network Training
  Frank Schneider · Zachary Nado · Philipp Hennig · George Dahl · Naman Agarwal
- 2022 Poster: Data Augmentation for Compositional Data: Advancing Predictive Models of the Microbiome
  Elliott Gordon-Rodriguez · Thomas Quinn · John Cunningham
- 2022 Poster: Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks
  Agustinus Kristiadi · Runa Eschenhagen · Philipp Hennig
- 2022 Poster: Deep Ensembles Work, But Are They Necessary?
  Taiga Abe · Estefany Kelly Buchanan · Geoff Pleiss · Richard Zemel · John Cunningham
- 2021 Poster: The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
  Geoff Pleiss · John Cunningham
- 2021 Poster: Posterior Collapse and Latent Variable Non-identifiability
  Yixin Wang · David Blei · John Cunningham
- 2021 Poster: Rectangular Flows for Manifold Learning
  Anthony Caterini · Gabriel Loaiza-Ganem · Geoff Pleiss · John Cunningham
- 2020 Poster: Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking
  Anqi Wu · Estefany Kelly Buchanan · Matthew Whiteway · Michael Schartner · Guido Meijer · Jean-Paul Noel · Erica Rodriguez · Claire Everett · Amy Norovich · Evan Schaffer · Neeli Mishra · C. Daniel Salzman · Dora Angelaki · Andrés Bendesky · The International Brain Laboratory · John Cunningham · Liam Paninski
- 2020 Poster: Recurrent Switching Dynamical Systems Models for Multiple Interacting Neural Populations
  Joshua Glaser · Matthew Whiteway · John Cunningham · Liam Paninski · Scott Linderman
- 2020 Poster: Probabilistic Linear Solvers for Machine Learning
  Jonathan Wenger · Philipp Hennig
- 2020 Poster: Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax
  Andres Potapczynski · Gabriel Loaiza-Ganem · John Cunningham
- 2019 Poster: Paraphrase Generation with Latent Bag of Words
  Yao Fu · Yansong Feng · John Cunningham
- 2019 Poster: BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
  Eleanor Batty · Matthew Whiteway · Shreya Saxena · Dan Biderman · Taiga Abe · Simon Musall · Winthrop Gillis · Jeffrey Markowitz · Anne Churchland · John Cunningham · Sandeep R Datta · Scott Linderman · Liam Paninski
- 2019 Poster: Deep Random Splines for Point Process Intensity Estimation of Neural Population Data
  Gabriel Loaiza-Ganem · Sean Perkins · Karen Schroeder · Mark Churchland · John Cunningham
- 2019 Poster: The continuous Bernoulli: fixing a pervasive error in variational autoencoders
  Gabriel Loaiza-Ganem · John Cunningham
- 2016 Poster: Linear dynamical neural population models through nonlinear embeddings
  Yuanjun Gao · Evan Archer · Liam Paninski · John Cunningham
- 2016 Poster: Automated scalable segmentation of neurons from multispectral images
  Uygar Sümbül · Douglas Roossien · Dawen Cai · Fei Chen · Nicholas Barry · John Cunningham · Edward Boyden · Liam Paninski
- 2015 Poster: Bayesian Active Model Selection with an Application to Automated Audiometry
  Jacob Gardner · Gustavo Malkomes · Roman Garnett · Kilian Weinberger · Dennis Barbour · John Cunningham
- 2015 Poster: High-dimensional neural spike train analysis with generalized count linear dynamical systems
  Yuanjun Gao · Lars Busing · Krishna V Shenoy · John Cunningham
- 2015 Spotlight: High-dimensional neural spike train analysis with generalized count linear dynamical systems
  Yuanjun Gao · Lars Busing · Krishna V Shenoy · John Cunningham