Poster
On the Expressiveness of Approximate Inference in Bayesian Neural Networks
Andrew Foong · David Burt · Yingzhen Li · Richard Turner

Thu Dec 10, 09:00 AM – 11:00 AM (PST) @ Poster Session 5 #1626

While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood. We study the quality of common variational methods in approximating the Bayesian predictive distribution. For single-hidden layer ReLU BNNs, we prove a fundamental limitation in function-space of two of the most commonly used distributions defined in weight-space: mean-field Gaussian and Monte Carlo dropout. We find there are simple cases where neither method can have substantially increased uncertainty in between well-separated regions of low uncertainty. We provide strong empirical evidence that exact inference does not have this pathology, hence it is due to the approximation and not the model. In contrast, for deep networks, we prove a universality result showing that there exist approximate posteriors in the above classes which provide flexible uncertainty estimates. However, we find empirically that pathologies of a similar form as in the single-hidden layer case can persist when performing variational inference in deeper networks. Our results motivate careful consideration of the implications of approximate inference methods in BNNs.
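As a minimal illustration of the setup described in the abstract (not the authors' code), the sketch below fits a mean-field Gaussian variational posterior over a single-hidden-layer ReLU BNN to two well-separated clusters of 1-D data, then compares the predictive standard deviation between the clusters against a point far from the data. All architecture sizes, priors, and optimiser settings are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): mean-field Gaussian VI
# on a single-hidden-layer ReLU BNN, trained on two well-separated data
# clusters, to probe "in-between" predictive uncertainty.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Two well-separated clusters of noisy 1-D observations.
x = torch.cat([torch.linspace(-2.0, -1.0, 20),
               torch.linspace(1.0, 2.0, 20)]).unsqueeze(1)
y = torch.sin(3.0 * x) + 0.05 * torch.randn_like(x)

H = 50            # hidden units (assumed)
noise_std = 0.05  # assumed known observation noise
prior_std = 1.0   # N(0, 1) prior on all weights (assumed)

def vparam(*shape):
    # Variational mean and pre-softplus std for one weight tensor.
    return (torch.zeros(*shape, requires_grad=True),
            torch.full(shape, -3.0, requires_grad=True))

(w1_mu, w1_rho), (b1_mu, b1_rho) = vparam(1, H), vparam(H)
(w2_mu, w2_rho), (b2_mu, b2_rho) = vparam(H, 1), vparam(1)
vparams = [(w1_mu, w1_rho), (b1_mu, b1_rho), (w2_mu, w2_rho), (b2_mu, b2_rho)]

def sample(mu, rho):
    # Reparameterised Gaussian sample; softplus keeps the std positive.
    return mu + F.softplus(rho) * torch.randn_like(mu)

def forward(x):
    # One stochastic forward pass through the single-hidden-layer ReLU net.
    w1, b1 = sample(w1_mu, w1_rho), sample(b1_mu, b1_rho)
    w2, b2 = sample(w2_mu, w2_rho), sample(b2_mu, b2_rho)
    return torch.relu(x @ w1 + b1) @ w2 + b2

def kl(mu, rho):
    # KL(q || p) between diagonal Gaussians, with p = N(0, prior_std^2 I).
    std = F.softplus(rho)
    return (torch.log(prior_std / std)
            + (std**2 + mu**2) / (2 * prior_std**2) - 0.5).sum()

opt = torch.optim.Adam([p for pair in vparams for p in pair], lr=1e-2)
for step in range(3000):
    opt.zero_grad()
    # Single-sample ELBO estimate: Gaussian NLL plus KL regulariser.
    nll = 0.5 * ((y - forward(x)) ** 2).sum() / noise_std**2
    loss = nll + sum(kl(mu, rho) for mu, rho in vparams)
    loss.backward()
    opt.step()

# Predictive std between the clusters vs. far outside the data.
with torch.no_grad():
    xs = torch.tensor([[0.0], [4.0]])
    samples = torch.stack([forward(xs) for _ in range(200)])
    print("predictive std between clusters:", samples.std(0)[0].item())
    print("predictive std far from data:  ", samples.std(0)[1].item())
```

Under the paper's result, the mean-field approximation cannot substantially raise the predictive variance at the in-between point relative to the surrounding low-uncertainty regions, which a plot of the predictive band over [-3, 3] would make visible.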

Author Information

Andrew Foong (University of Cambridge)

I am a PhD student in the Machine Learning Group at the University of Cambridge, supervised by Professor Richard E. Turner, and advised by Dr. José Miguel Hernández-Lobato. I started my PhD in October 2018. My research focuses on the intersection of probabilistic modelling and deep learning, with work on Bayesian neural networks, meta-learning, modelling equivariance, and PAC-Bayes.

David Burt (University of Cambridge)
Yingzhen Li (Microsoft Research Cambridge)

Yingzhen Li is a senior researcher at Microsoft Research Cambridge. She received her PhD from the University of Cambridge and previously interned at Disney Research. She is passionate about building reliable machine learning systems, and her approach combines Bayesian statistics and deep learning. Her contributions to approximate inference include: (1) algorithmic advances, such as variational inference with alternative divergences, combining variational inference with MCMC, and approximate inference with implicit distributions; (2) applications of approximate inference, such as uncertainty estimation in Bayesian neural networks and algorithms for training deep generative models. She has served as an area chair at NeurIPS, ICML, ICLR, and AISTATS on related research topics, and she is a co-organizer of the AABI 2020 symposium, a flagship event in approximate inference.

Richard Turner (University of Cambridge)
