Monte Carlo (MC) integration is the de facto method for approximating the predictive distribution of Bayesian neural networks (BNNs). But even with many MC samples, Gaussian-based BNNs can still yield poor predictive performance due to the error of the posterior approximation itself. Meanwhile, alternatives to MC integration are expensive. In this work, we show experimentally that the key to good MC-approximated predictive distributions is the quality of the approximate posterior. However, previous methods for obtaining accurate posterior approximations are expensive and non-trivial to implement. We therefore propose to refine Gaussian approximate posteriors with normalizing flows. Applied to last-layer BNNs, this yields a simple, cost-efficient, post hoc method for improving pre-existing parametric approximations. We show that the resulting posterior approximation is competitive even with the gold-standard full-batch Hamiltonian Monte Carlo.
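To make the idea concrete, here is a minimal PyTorch sketch of refining a Gaussian approximate posterior over last-layer weights with a normalizing flow. It uses simple planar flows (Rezende & Mohamed, 2015) trained to minimize the KL divergence to the true posterior (the negative ELBO, up to the constant log-evidence). This is an illustration of the general technique, not the authors' code: the names `PlanarFlow`, `refine`, and `log_joint`, the flow architecture, and the toy logistic-regression target below are all assumptions for the sake of a runnable example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlanarFlow(nn.Module):
    """One planar flow layer f(z) = z + u * tanh(w^T z + b)."""

    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(0.01 * torch.randn(dim))
        self.w = nn.Parameter(0.01 * torch.randn(dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # Reparametrize u so that w^T u_hat >= -1, which guarantees invertibility.
        wu = self.w @ self.u
        u_hat = self.u + (F.softplus(wu) - 1 - wu) * self.w / (self.w @ self.w)
        lin = z @ self.w + self.b                      # (n_samples,)
        f = z + u_hat * torch.tanh(lin).unsqueeze(-1)  # transformed samples
        psi = (1 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log((1 + psi @ u_hat).abs() + 1e-8)
        return f, log_det


def refine(base_mean, base_scale_tril, log_joint, n_flows=5, n_steps=2000, n_samples=32):
    """Refine a Gaussian posterior N(base_mean, L L^T) post hoc by stacking
    planar flows on top and maximizing the ELBO w.r.t. the flow parameters."""
    base = torch.distributions.MultivariateNormal(base_mean, scale_tril=base_scale_tril)
    flows = nn.ModuleList([PlanarFlow(base_mean.numel()) for _ in range(n_flows)])
    opt = torch.optim.Adam(flows.parameters(), lr=1e-3)

    for _ in range(n_steps):
        z = base.rsample((n_samples,))   # (n_samples, dim), reparametrized
        log_q = base.log_prob(z)
        for flow in flows:
            z, log_det = flow(z)
            log_q = log_q - log_det      # change-of-variables correction
        loss = (log_q - log_joint(z)).mean()  # negative ELBO
        opt.zero_grad()
        loss.backward()
        opt.step()

    return base, flows


# Toy usage (hypothetical data): refine a unit-Gaussian "approximate posterior"
# over 2 last-layer weights of a Bayesian logistic-regression head.
X = torch.randn(100, 2)
y = (X @ torch.tensor([1.5, -2.0]) > 0).float()

def log_joint(W):  # W: (n_samples, 2); standard-normal prior on the weights
    logits = W @ X.T
    log_lik = -F.binary_cross_entropy_with_logits(
        logits, y.expand_as(logits), reduction="none").sum(-1)
    return log_lik - 0.5 * (W ** 2).sum(-1)

base, flows = refine(torch.zeros(2), torch.eye(2), log_joint)

# MC prediction then simply pushes base samples through the trained flows.
z = base.sample((1000,))
for flow in flows:
    z, _ = flow(z)
```

Because the flow is trained after the fact on top of a fixed Gaussian (e.g., a Laplace or variational approximation restricted to the last layer), the refinement adds only a small optimization over the low-dimensional last-layer weight space, which is what makes the method cheap and post hoc.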
Author Information
Agustinus Kristiadi (University of Tübingen)
Runa Eschenhagen (University of Tübingen)
Philipp Hennig (University of Tübingen)
More from the Same Authors
- 2021 Spotlight: An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2021: Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
  Runa Eschenhagen · Erik Daxberger · Philipp Hennig · Agustinus Kristiadi
- 2021: Being a Bit Frequentist Improves Bayesian Neural Networks
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2022: Late-Phase Second-Order Training
  Lukas Tatzel · Philipp Hennig · Frank Schneider
- 2022 Workshop: Has it Trained Yet? A Workshop for Algorithmic Efficiency in Practical Neural Network Training
  Frank Schneider · Zachary Nado · Philipp Hennig · George Dahl · Naman Agarwal
- 2022 Poster: Posterior and Computational Uncertainty in Gaussian Processes
  Jonathan Wenger · Geoff Pleiss · Marvin Pförtner · Philipp Hennig · John Cunningham
- 2021 Poster: Laplace Redux - Effortless Bayesian Deep Learning
  Erik Daxberger · Agustinus Kristiadi · Alexander Immer · Runa Eschenhagen · Matthias Bauer · Philipp Hennig
- 2021 Poster: An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig