While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood. We study the quality of common variational methods in approximating the Bayesian predictive distribution. For single-hidden layer ReLU BNNs, we prove a fundamental limitation in function-space of two of the most commonly used distributions defined in weight-space: mean-field Gaussian and Monte Carlo dropout. We find there are simple cases where neither method can have substantially increased uncertainty in between well-separated regions of low uncertainty. We provide strong empirical evidence that exact inference does not have this pathology, hence it is due to the approximation and not the model. In contrast, for deep networks, we prove a universality result showing that there exist approximate posteriors in the above classes which provide flexible uncertainty estimates. However, we find empirically that pathologies of a similar form as in the single-hidden layer case can persist when performing variational inference in deeper networks. Our results motivate careful consideration of the implications of approximate inference methods in BNNs.
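The function-space perspective described above can be sketched numerically: draw weight samples from a mean-field Gaussian over the parameters of a single-hidden-layer ReLU network and examine the spread of the resulting outputs across inputs. The posterior means and standard deviations below are illustrative placeholders chosen by hand, not quantities fitted by variational inference, and the network sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean-field Gaussian posterior: an independent Gaussian per weight.
# These means and standard deviations are illustrative, not fitted by VI.
H = 50  # number of hidden units
w1_mu, w1_sd = rng.normal(size=(H, 1)), 0.3 * np.ones((H, 1))
b1_mu, b1_sd = rng.normal(size=H), 0.3 * np.ones(H)
w2_mu, w2_sd = rng.normal(size=H) / np.sqrt(H), 0.3 * np.ones(H) / np.sqrt(H)

def sample_function(x):
    """Draw one weight sample from the mean-field posterior and
    evaluate the induced single-hidden-layer ReLU network at inputs x."""
    w1 = w1_mu + w1_sd * rng.normal(size=w1_mu.shape)
    b1 = b1_mu + b1_sd * rng.normal(size=b1_mu.shape)
    w2 = w2_mu + w2_sd * rng.normal(size=w2_mu.shape)
    h = np.maximum(0.0, x[:, None] @ w1.T + b1)  # ReLU hidden activations
    return h @ w2                                # scalar output per input

# Monte Carlo estimate of the function-space predictive standard deviation.
x = np.linspace(-2.0, 2.0, 101)
fs = np.stack([sample_function(x) for _ in range(500)])
pred_sd = fs.std(axis=0)  # uncertainty of the approximate posterior at each x
```

Plotting `pred_sd` against `x` (for a posterior actually fitted on two well-separated data clusters) is the kind of diagnostic the abstract refers to: the paper's result concerns whether this curve can rise substantially in between regions where it is low.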
Author Information
Andrew Foong (University of Cambridge)
I am a PhD student in the Machine Learning Group at the University of Cambridge, supervised by Professor Richard E. Turner, and advised by Dr. José Miguel Hernández-Lobato. I started my PhD in October 2018. My research focuses on the intersection of probabilistic modelling and deep learning, with work on Bayesian neural networks, meta-learning, modelling equivariance, and PAC-Bayes.
David Burt (University of Cambridge)
Yingzhen Li (Microsoft Research Cambridge)
Yingzhen Li is a senior researcher at Microsoft Research Cambridge. She received her PhD from the University of Cambridge, and she previously interned at Disney Research. She is passionate about building reliable machine learning systems, and her approach combines both Bayesian statistics and deep learning. Her contributions to the approximate inference field include: (1) algorithmic advances, such as variational inference with different divergences, combining variational inference with MCMC, and approximate inference with implicit distributions; (2) applications of approximate inference, such as uncertainty estimation in Bayesian neural networks and algorithms to train deep generative models. She has served as an area chair at NeurIPS/ICML/ICLR/AISTATS on related research topics, and she is a co-organizer of the AABI 2020 symposium, a flagship event of the approximate inference community.
Richard Turner (University of Cambridge)
More from the Same Authors
-
2021 : Accurate Imputation and Efficient Data Acquisition with Transformer-based VAEs »
Sarah Lewis · Tatiana Matejovicova · Yingzhen Li · Angus Lamb · Yordan Zaykov · Miltiadis Allamanis · Cheng Zhang -
2022 Poster: Scalable Infomin Learning »
Yanzhi Chen · Weihao Sun · Yingzhen Li · Adrian Weller -
2022 : Ice Core Dating using Probabilistic Programming »
Aditya Ravuri · Tom Andersson · Ieva Kazlauskaite · William Tebbutt · Richard Turner · Scott Hosking · Neil Lawrence · Markus Kaiser -
2022 : Active Learning with Convolutional Gaussian Neural Processes for Environmental Sensor Placement »
Tom Andersson · Wessel Bruinsma · Efstratios Markou · Daniel C. Jones · Scott Hosking · James Requeima · Anna Vaughan · Anna-Louise Ellis · Matthew Lazzara · Richard Turner -
2022 : Contextual Squeeze-and-Excitation »
Massimiliano Patacchiola · John Bronskill · Aliaksandra Shysheya · Katja Hofmann · Sebastian Nowozin · Richard Turner -
2022 : FiT: Parameter Efficient Few-shot Transfer Learning »
Aliaksandra Shysheya · John Bronskill · Massimiliano Patacchiola · Sebastian Nowozin · Richard Turner -
2022 : Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners »
Elre Oldewage · John Bronskill · Richard Turner -
2022 : Panel »
Erin Grant · Richard Turner · Neil Houlsby · Priyanka Agrawal · Abhijeet Awasthi · Salomey Osei -
2022 : Poster session 1 »
Yingzhen Li -
2022 Workshop: NeurIPS 2022 Workshop on Score-Based Methods »
Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat -
2022 Poster: Repairing Neural Networks by Leaving the Right Past Behind »
Ryutaro Tanno · Melanie F. Pradier · Aditya Nori · Yingzhen Li -
2022 Poster: Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification »
Massimiliano Patacchiola · John Bronskill · Aliaksandra Shysheya · Katja Hofmann · Sebastian Nowozin · Richard Turner -
2022 Poster: Learning Neural Set Functions Under the Optimal Subset Oracle »
Zijing Ou · Tingyang Xu · Qinliang Su · Yingzhen Li · Peilin Zhao · Yatao Bian -
2021 Workshop: Bayesian Deep Learning »
Yarin Gal · Yingzhen Li · Sebastian Farquhar · Christos Louizos · Eric Nalisnick · Andrew Gordon Wilson · Zoubin Ghahramani · Kevin Murphy · Max Welling -
2021 Poster: Sparse Uncertainty Representation in Deep Learning with Inducing Weights »
Hippolyt Ritter · Martin Kukla · Cheng Zhang · Yingzhen Li -
2021 : Evaluating Approximate Inference in Bayesian Deep Learning + Q&A »
Andrew Gordon Wilson · Pavel Izmailov · Matthew Hoffman · Yarin Gal · Yingzhen Li · Melanie F. Pradier · Sharad Vikram · Andrew Foong · Sanae Lotfi · Sebastian Farquhar -
2021 Poster: How Tight Can PAC-Bayes be in the Small Data Regime? »
Andrew Foong · Wessel Bruinsma · David Burt · Richard Turner -
2021 Poster: Collapsed Variational Bounds for Bayesian Neural Networks »
Marcin Tomczak · Siddharth Swaroop · Andrew Foong · Richard Turner -
2021 Poster: Memory Efficient Meta-Learning with Large Images »
John Bronskill · Daniela Massiceti · Massimiliano Patacchiola · Katja Hofmann · Sebastian Nowozin · Richard Turner -
2020 Poster: Efficient Low Rank Gaussian Variational Inference for Neural Networks »
Marcin Tomczak · Siddharth Swaroop · Richard Turner -
2020 Poster: Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes »
Andrew Foong · Wessel Bruinsma · Jonathan Gordon · Yann Dubois · James Requeima · Richard Turner -
2020 Poster: VAEM: a Deep Generative Model for Heterogeneous Mixed Type Data »
Chao Ma · Sebastian Tschiatschek · Richard Turner · José Miguel Hernández-Lobato · Cheng Zhang -
2020 Poster: Continual Deep Learning by Functional Regularisation of Memorable Past »
Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan -
2020 Oral: Continual Deep Learning by Functional Regularisation of Memorable Past »
Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan -
2020 Tutorial: (Track1) Advances in Approximate Inference »
Yingzhen Li · Cheng Zhang -
2019 Poster: Icebreaker: Element-wise Efficient Information Acquisition with a Bayesian Deep Latent Gaussian Model »
Wenbo Gong · Sebastian Tschiatschek · Sebastian Nowozin · Richard Turner · José Miguel Hernández-Lobato · Cheng Zhang -
2019 Poster: Practical Deep Learning with Bayesian Principles »
Kazuki Osawa · Siddharth Swaroop · Mohammad Emtiyaz Khan · Anirudh Jain · Runa Eschenhagen · Richard Turner · Rio Yokota -
2018 Poster: Infinite-Horizon Gaussian Processes »
Arno Solin · James Hensman · Richard Turner -
2018 Poster: Geometrically Coupled Monte Carlo Sampling »
Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller -
2018 Spotlight: Geometrically Coupled Monte Carlo Sampling »
Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller -
2017 Poster: Streaming Sparse Gaussian Process Approximations »
Thang Bui · Cuong Nguyen · Richard Turner -
2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning »
Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine -
2016 Poster: Rényi Divergence Variational Inference »
Yingzhen Li · Richard Turner -
2015 Poster: Neural Adaptive Sequential Monte Carlo »
Shixiang (Shane) Gu · Zoubin Ghahramani · Richard Turner -
2015 Poster: Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels »
Felipe Tobar · Thang Bui · Richard Turner -
2015 Poster: Stochastic Expectation Propagation »
Yingzhen Li · José Miguel Hernández-Lobato · Richard Turner -
2015 Spotlight: Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels »
Felipe Tobar · Thang Bui · Richard Turner -
2015 Spotlight: Stochastic Expectation Propagation »
Yingzhen Li · José Miguel Hernández-Lobato · Richard Turner -
2014 Poster: Tree-structured Gaussian Process Approximations »
Thang Bui · Richard Turner -
2014 Spotlight: Tree-structured Gaussian Process Approximations »
Thang Bui · Richard Turner -
2011 Poster: Probabilistic amplitude and frequency demodulation »
Richard Turner · Maneesh Sahani -
2011 Spotlight: Probabilistic amplitude and frequency demodulation »
Richard Turner · Maneesh Sahani -
2009 Poster: Occlusive Components Analysis »
Jörg Lücke · Richard Turner · Maneesh Sahani · Marc Henniges -
2007 Workshop: Beyond Simple Cells: Probabilistic Models for Visual Cortical Processing »
Richard Turner · Pietro Berkes · Maneesh Sahani -
2007 Poster: Modeling Natural Sounds with Modulation Cascade Processes »
Richard Turner · Maneesh Sahani -
2007 Poster: On Sparsity and Overcompleteness in Image Models »
Pietro Berkes · Richard Turner · Maneesh Sahani