Bayesian formulations of deep learning have been shown to have compelling theoretical properties and to offer practical functional benefits, such as improved predictive uncertainty quantification and model selection. The Laplace approximation (LA) is a classic, and arguably the simplest, family of approximations for the intractable posteriors of deep neural networks. Yet, despite its simplicity, the LA is not as popular as alternatives like variational Bayes or deep ensembles. This may be due to assumptions that the LA is expensive because of the Hessian computation it involves, that it is difficult to implement, or that it yields inferior results. In this work we show that these are misconceptions: we (i) review the range of variants of the LA, including versions with minimal cost overhead; (ii) introduce "laplace", an easy-to-use software library for PyTorch offering user-friendly access to all major flavors of the LA; and (iii) demonstrate through extensive experiments that the LA is competitive with more popular alternatives in terms of performance, while excelling in terms of computational cost. We hope that this work will serve as a catalyst to wider adoption of the LA in practical deep learning, including in domains where Bayesian approaches are not typically considered at the moment.
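The library is designed for post-hoc use on an already-trained network: the LA fits a Gaussian around the MAP estimate, with the covariance given by an (approximate) inverse Hessian of the loss. Below is a minimal usage sketch following the fit-then-predict pattern of the laplace package; load_map_model, train_loader, and x are placeholders for the user's own model-loading routine, data loader, and test inputs.

    from laplace import Laplace

    # A pre-trained (MAP) PyTorch classifier; load_map_model() stands in
    # for any standard training or checkpoint-loading routine.
    model = load_map_model()

    # Choose the LA flavor: here a post-hoc LA over all weights with a
    # diagonal Hessian approximation.
    la = Laplace(model, 'classification',
                 subset_of_weights='all',
                 hessian_structure='diag')

    # Fit the Gaussian approximation around the MAP estimate.
    la.fit(train_loader)

    # Tune the prior precision by maximizing the marginal likelihood.
    la.optimize_prior_precision(method='marglik')

    # Approximate predictive distribution via the probit approximation.
    pred = la(x, link_approx='probit')

Swapping subset_of_weights (e.g., to 'last_layer') or hessian_structure (e.g., to 'kron') selects the cheaper LA variants discussed above without changing the rest of the workflow.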
Author Information
Erik Daxberger (University of Cambridge & MPI for Intelligent Systems, Tübingen)
Agustinus Kristiadi (University of Tübingen)
Alexander Immer (ETH Zurich)
Runa Eschenhagen (University of Tübingen)
Matthias Bauer (DeepMind)
Philipp Hennig (University of Tübingen & MPI for Intelligent Systems, Tübingen)
More from the Same Authors
- 2021 Spotlight: An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2021: Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
  Runa Eschenhagen · Erik Daxberger · Philipp Hennig · Agustinus Kristiadi
- 2021: Being a Bit Frequentist Improves Bayesian Neural Networks
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2021: Pathologies in Priors and Inference for Bayesian Transformers
  Tristan Cinquin · Alexander Immer · Max Horn · Vincent Fortuin
- 2021 Poster: A Probabilistic State Space Model for Joint Inference from Differential Equations and Data
  Jonathan Schmidt · Nicholas Krämer · Philipp Hennig
- 2021 Poster: An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence
  Agustinus Kristiadi · Matthias Hein · Philipp Hennig
- 2021 Poster: Linear-Time Probabilistic Solution of Boundary Value Problems
  Nicholas Krämer · Philipp Hennig
- 2021 Poster: Cockpit: A Practical Debugging Tool for the Training of Deep Neural Networks
  Frank Schneider · Felix Dangel · Philipp Hennig
- 2020 Poster: Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining
  Austin Tripp · Erik Daxberger · José Miguel Hernández-Lobato
- 2019 Poster: Approximate Inference Turns Deep Networks into Gaussian Processes
  Mohammad Emtiyaz Khan · Alexander Immer · Ehsan Abedi · Maciej Korzepa
- 2016 Workshop: Optimizing the Optimizers
  Maren Mahsereci · Alex Davies · Philipp Hennig
- 2015 Workshop: Probabilistic Integration
  Michael A Osborne · Philipp Hennig
- 2015 Poster: Probabilistic Line Searches for Stochastic Optimization
  Maren Mahsereci · Philipp Hennig
- 2015 Oral: Probabilistic Line Searches for Stochastic Optimization
  Maren Mahsereci · Philipp Hennig
- 2014 Poster: Incremental Local Gaussian Regression
  Franziska Meier · Philipp Hennig · Stefan Schaal
- 2014 Poster: Probabilistic ODE Solvers with Runge-Kutta Means
  Michael Schober · David Duvenaud · Philipp Hennig
- 2014 Poster: Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature
  Tom Gunter · Michael A Osborne · Roman Garnett · Philipp Hennig · Stephen J Roberts
- 2014 Oral: Probabilistic ODE Solvers with Runge-Kutta Means
  Michael Schober · David Duvenaud · Philipp Hennig
- 2013 Workshop: Bayesian Optimization in Theory and Practice
  Matthew Hoffman · Jasper Snoek · Nando de Freitas · Michael A Osborne · Ryan Adams · Sebastien Bubeck · Philipp Hennig · Remi Munos · Andreas Krause
- 2013 Poster: The Randomized Dependence Coefficient
  David Lopez-Paz · Philipp Hennig · Bernhard Schölkopf
- 2012 Workshop: Probabilistic Numerics
  Philipp Hennig · John P Cunningham · Michael A Osborne
- 2011 Poster: Optimal Reinforcement Learning for Gaussian Systems
  Philipp Hennig