

Oral Poster

Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent

Jihao Andreas Lin · Javier Antorán · Shreyas Padhy · David Janz · José Miguel Hernández-Lobato · Alexander Terenin

Great Hall & Hall B1+B2 (level 1) #1221
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST
 
Oral presentation: Oral 5C Probability/Sampling
Thu 14 Dec 8 a.m. PST — 8:45 a.m. PST

Abstract:

Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems. In general, this has a cubic cost in dataset size and is sensitive to conditioning. We explore stochastic gradient algorithms as a computationally efficient method of approximately solving these linear systems: we develop low-variance optimization objectives for sampling from the posterior and extend these to inducing points. Counterintuitively, stochastic gradient descent often produces accurate predictions, even in cases where it does not converge quickly to the optimum. We explain this through a spectral characterization of the implicit bias from non-convergence. We show that stochastic gradient descent produces predictive distributions close to the true posterior both in regions with sufficient data coverage, and in regions sufficiently far away from the data. Experimentally, stochastic gradient descent achieves state-of-the-art performance on sufficiently large-scale or ill-conditioned regression tasks. Its uncertainty estimates match the performance of significantly more expensive baselines on a large-scale Bayesian optimization task.
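To make the core idea concrete, below is a minimal, self-contained sketch of the kind of computation the abstract describes: recovering the GP posterior mean by solving the linear system (K + σ²I) v = y with stochastic gradient descent on a convex quadratic, rather than by direct (cubic-cost) factorization. The RBF kernel, toy data, batch size, and learning rate are illustrative assumptions, and this is a generic SGD objective, not the paper's low-variance objective, its posterior-sampling extension, or its inducing-point variant.

import numpy as np

# Sketch: approximate the representer weights v* = (K + sigma^2 I)^{-1} y by
# minimising the convex quadratic 0.5 v^T (K + sigma^2 I) v - y^T v with SGD.

def rbf_kernel(a, b, lengthscale=0.5):
    # Squared-exponential kernel on 1D inputs (illustrative choice).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
n = 500
x = np.sort(rng.uniform(-3.0, 3.0, n))          # toy inputs
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(n)  # toy noisy targets

sigma2 = 0.1 ** 2                                # observation noise variance
K = rbf_kernel(x, x)

v = np.zeros(n)                                  # representer weights
batch, lr, steps = 64, 1e-3, 20000               # illustrative hyperparameters
for _ in range(steps):
    idx = rng.choice(n, size=batch, replace=False)
    # Unbiased stochastic gradient of the quadratic: evaluate only a random
    # subset of rows of (K + sigma^2 I) v - y and rescale by n / batch.
    g = np.zeros(n)
    g[idx] = (n / batch) * (K[idx] @ v + sigma2 * v[idx] - y[idx])
    v -= lr * g

# Posterior mean at test locations: k(x_test, X) (K + sigma^2 I)^{-1} y.
x_test = np.linspace(-3.0, 3.0, 200)
posterior_mean = rbf_kernel(x_test, x) @ v

Even when the iterates have not fully converged, the abstract's claim is that predictions of this form remain accurate both where data is dense and far from the data, which is the implicit-bias effect the paper characterizes spectrally.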
