Deep neural networks are prone to overconfident predictions on outliers. Bayesian neural networks and deep ensembles have both been shown to mitigate this problem to some extent. In this work, we aim to combine the benefits of the two approaches by proposing to predict with a Gaussian mixture model posterior that consists of a weighted sum of Laplace approximations of independently trained deep neural networks. The method can be used post hoc with any set of pre-trained networks and requires only a small computational and memory overhead compared to regular ensembles. We theoretically validate that our approach mitigates overconfidence "far away" from the training data and empirically compare against state-of-the-art baselines on standard uncertainty quantification benchmarks.
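To make the recipe concrete, the sketch below shows one way such a mixture predictive could be assembled with the laplace-torch package; the abstract does not prescribe a specific implementation, and this is not the authors' released code. It fits a post-hoc Laplace approximation to each pre-trained ensemble member and averages the resulting predictive distributions. The names `models`, `train_loader`, `fit_laplace_components`, and `mola_predict` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a mixture-of-Laplace predictive,
# assuming the `laplace` (laplace-torch) package, a list `models` of
# independently pre-trained ensemble members, and a `train_loader`.
import torch
from laplace import Laplace


def fit_laplace_components(models, train_loader):
    """Fit a post-hoc (last-layer, Kronecker-factored) Laplace approximation
    to each pre-trained ensemble member."""
    components = []
    for model in models:
        la = Laplace(model, likelihood='classification',
                     subset_of_weights='last_layer',
                     hessian_structure='kron')
        la.fit(train_loader)
        # Tune the prior precision post hoc via the marginal likelihood.
        la.optimize_prior_precision(method='marglik')
        components.append(la)
    return components


def mola_predict(components, x):
    """Predict with an equal-weight Gaussian mixture over the Laplace
    components: average the per-component predictive class probabilities."""
    probs = torch.stack([la(x) for la in components])  # (K, batch, classes)
    return probs.mean(dim=0)
```

Calling `mola_predict(fit_laplace_components(models, train_loader), x_test)` then yields the mixture predictive; a weighted sum, as in the abstract, would replace the plain mean with a weighted average over components.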
Author Information
Runa Eschenhagen (University of Tübingen)
Erik Daxberger (University of Cambridge & MPI for Intelligent Systems, Tübingen)
Philipp Hennig (University of Tübingen & MPI for Intelligent Systems, Tübingen)
Agustinus Kristiadi (University of Tübingen)
More from the Same Authors
- 2021 Spotlight: An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2021: Being a Bit Frequentist Improves Bayesian Neural Networks
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2021 Poster: Laplace Redux - Effortless Bayesian Deep Learning
  Erik Daxberger · Agustinus Kristiadi · Alexander Immer · Runa Eschenhagen · Matthias Bauer · Philipp Hennig
- 2021 Poster: A Probabilistic State Space Model for Joint Inference from Differential Equations and Data
  Jonathan Schmidt · Nicholas Krämer · Philipp Hennig
- 2021 Poster: An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2021 Poster: Linear-Time Probabilistic Solution of Boundary Value Problems
  Nicholas Krämer · Philipp Hennig
- 2021 Poster: Cockpit: A Practical Debugging Tool for the Training of Deep Neural Networks
  Frank Schneider · Felix Dangel · Philipp Hennig
- 2020 Poster: Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining
  Austin Tripp · Erik Daxberger · José Miguel Hernández-Lobato
- 2016 Workshop: Optimizing the Optimizers
  Maren Mahsereci · Alex Davies · Philipp Hennig
- 2015 Workshop: Probabilistic Integration
  Michael A Osborne · Philipp Hennig
- 2015 Poster: Probabilistic Line Searches for Stochastic Optimization
  Maren Mahsereci · Philipp Hennig
- 2015 Oral: Probabilistic Line Searches for Stochastic Optimization
  Maren Mahsereci · Philipp Hennig
- 2014 Poster: Incremental Local Gaussian Regression
  Franziska Meier · Philipp Hennig · Stefan Schaal
- 2014 Poster: Probabilistic ODE Solvers with Runge-Kutta Means
  Michael Schober · David Duvenaud · Philipp Hennig
- 2014 Poster: Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature
  Tom Gunter · Michael A Osborne · Roman Garnett · Philipp Hennig · Stephen J Roberts
- 2014 Oral: Probabilistic ODE Solvers with Runge-Kutta Means
  Michael Schober · David Duvenaud · Philipp Hennig
- 2013 Workshop: Bayesian Optimization in Theory and Practice
  Matthew Hoffman · Jasper Snoek · Nando de Freitas · Michael A Osborne · Ryan Adams · Sebastien Bubeck · Philipp Hennig · Remi Munos · Andreas Krause
- 2013 Poster: The Randomized Dependence Coefficient
  David Lopez-Paz · Philipp Hennig · Bernhard Schölkopf
- 2012 Workshop: Probabilistic Numerics
  Philipp Hennig · John P Cunningham · Michael A Osborne
- 2011 Poster: Optimal Reinforcement Learning for Gaussian Systems
  Philipp Hennig