While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood. We study the quality of common variational methods in approximating the Bayesian predictive distribution. For single-hidden layer ReLU BNNs, we prove a fundamental limitation in function-space of two of the most commonly used distributions defined in weight-space: mean-field Gaussian and Monte Carlo dropout. We find there are simple cases where neither method can have substantially increased uncertainty in between well-separated regions of low uncertainty. We provide strong empirical evidence that exact inference does not have this pathology, hence it is due to the approximation and not the model. In contrast, for deep networks, we prove a universality result showing that there exist approximate posteriors in the above classes which provide flexible uncertainty estimates. However, we find empirically that pathologies of a similar form as in the single-hidden layer case can persist when performing variational inference in deeper networks. Our results motivate careful consideration of the implications of approximate inference methods in BNNs.
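The setting studied above can be sketched in a few lines: a single-hidden-layer ReLU network with a mean-field (fully factorised) Gaussian approximate posterior over the weights, where the predictive variance at a test input is estimated by Monte Carlo sampling of weight configurations. This is an illustrative sketch only, not the paper's code; the layer width, variational means, and standard deviations below are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not from the paper): a single-hidden-layer
# ReLU BNN with a mean-field Gaussian approximate posterior over weights.
# Predictive uncertainty at test inputs is estimated by Monte Carlo sampling.

rng = np.random.default_rng(0)
n_hidden = 50  # assumed layer width

# Mean-field Gaussian: an independent mean and std per weight (arbitrary values).
w1_mean, w1_std = rng.normal(size=(1, n_hidden)), 0.3 * np.ones((1, n_hidden))
b1_mean, b1_std = rng.normal(size=n_hidden), 0.3 * np.ones(n_hidden)
w2_mean, w2_std = rng.normal(size=(n_hidden, 1)), 0.3 * np.ones((n_hidden, 1))

def sample_predictions(x, n_samples=200):
    """Draw function samples f(x) under the factorised Gaussian posterior."""
    preds = []
    for _ in range(n_samples):
        # Reparameterised draw of each weight tensor.
        w1 = w1_mean + w1_std * rng.normal(size=w1_mean.shape)
        b1 = b1_mean + b1_std * rng.normal(size=b1_mean.shape)
        w2 = w2_mean + w2_std * rng.normal(size=w2_mean.shape)
        h = np.maximum(x @ w1 + b1, 0.0)  # ReLU hidden layer
        preds.append(h @ w2)
    return np.stack(preds)  # shape: (n_samples, n_points, 1)

# Evaluate the MC estimate of predictive variance on a 1-D grid of inputs.
x = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
f = sample_predictions(x)
pred_var = f.var(axis=0).ravel()  # one variance estimate per input point
print(pred_var.shape)
```

The paper's negative result concerns exactly this kind of estimate: for such approximate posteriors, the in-between predictive variance is provably constrained by the variance at surrounding points, regardless of how the variational parameters are chosen.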
Author Information
Andrew Foong (University of Cambridge)
David Burt (University of Cambridge)
Yingzhen Li (Microsoft Research Cambridge)
Yingzhen Li is a senior researcher at Microsoft Research Cambridge. She received her PhD from the University of Cambridge, and previously she interned at Disney Research. She is passionate about building reliable machine learning systems, and her approach combines Bayesian statistics and deep learning. Her contributions to approximate inference include: (1) algorithmic advances, such as variational inference with alternative divergences, combining variational inference with MCMC, and approximate inference with implicit distributions; (2) applications of approximate inference, such as uncertainty estimation in Bayesian neural networks and algorithms for training deep generative models. She has served as an area chair at NeurIPS/ICML/ICLR/AISTATS on related research topics, and she is a co-organizer of the AABI 2020 symposium, a flagship event for approximate inference.
Richard Turner (University of Cambridge)
More from the Same Authors
- 2020 Poster: Efficient Low Rank Gaussian Variational Inference for Neural Networks »
  Marcin Tomczak · Siddharth Swaroop · Richard Turner
- 2020 Poster: Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes »
  Andrew Foong · Wessel Bruinsma · Jonathan Gordon · Yann Dubois · James Requeima · Richard Turner
- 2020 Poster: VAEM: a Deep Generative Model for Heterogeneous Mixed Type Data »
  Chao Ma · Sebastian Tschiatschek · Richard Turner · José Miguel Hernández-Lobato · Cheng Zhang
- 2020 Poster: Continual Deep Learning by Functional Regularisation of Memorable Past »
  Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan
- 2020 Oral: Continual Deep Learning by Functional Regularisation of Memorable Past »
  Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan
- 2020 Tutorial: (Track1) Advances in Approximate Inference »
  Yingzhen Li · Cheng Zhang
- 2019 Poster: Icebreaker: Element-wise Efficient Information Acquisition with a Bayesian Deep Latent Gaussian Model »
  Wenbo Gong · Sebastian Tschiatschek · Sebastian Nowozin · Richard Turner · José Miguel Hernández-Lobato · Cheng Zhang
- 2019 Poster: Practical Deep Learning with Bayesian Principles »
  Kazuki Osawa · Siddharth Swaroop · Mohammad Emtiyaz Khan · Anirudh Jain · Runa Eschenhagen · Richard Turner · Rio Yokota
- 2018 Poster: Infinite-Horizon Gaussian Processes »
  Arno Solin · James Hensman · Richard Turner
- 2018 Poster: Geometrically Coupled Monte Carlo Sampling »
  Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller
- 2018 Spotlight: Geometrically Coupled Monte Carlo Sampling »
  Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller
- 2017 Poster: Streaming Sparse Gaussian Process Approximations »
  Thang Bui · Cuong Nguyen · Richard Turner
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning »
  Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016 Poster: Rényi Divergence Variational Inference »
  Yingzhen Li · Richard Turner
- 2015 Poster: Neural Adaptive Sequential Monte Carlo »
  Shixiang (Shane) Gu · Zoubin Ghahramani · Richard Turner
- 2015 Poster: Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels »
  Felipe Tobar · Thang Bui · Richard Turner
- 2015 Poster: Stochastic Expectation Propagation »
  Yingzhen Li · José Miguel Hernández-Lobato · Richard Turner
- 2015 Spotlight: Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels »
  Felipe Tobar · Thang Bui · Richard Turner
- 2015 Spotlight: Stochastic Expectation Propagation »
  Yingzhen Li · José Miguel Hernández-Lobato · Richard Turner
- 2014 Poster: Tree-structured Gaussian Process Approximations »
  Thang Bui · Richard Turner
- 2014 Spotlight: Tree-structured Gaussian Process Approximations »
  Thang Bui · Richard Turner
- 2011 Poster: Probabilistic amplitude and frequency demodulation »
  Richard Turner · Maneesh Sahani
- 2011 Spotlight: Probabilistic amplitude and frequency demodulation »
  Richard Turner · Maneesh Sahani
- 2009 Poster: Occlusive Components Analysis »
  Jörg Lücke · Richard Turner · Maneesh Sahani · Marc Henniges
- 2007 Workshop: Beyond Simple Cells: Probabilistic Models for Visual Cortical Processing »
  Richard Turner · Pietro Berkes · Maneesh Sahani
- 2007 Poster: Modeling Natural Sounds with Modulation Cascade Processes »
  Richard Turner · Maneesh Sahani
- 2007 Poster: On Sparsity and Overcompleteness in Image Models »
  Pietro Berkes · Richard Turner · Maneesh Sahani