Evaluating Approximate Inference in Bayesian Deep Learning + Q&A
Andrew Gordon Wilson · Pavel Izmailov · Matthew Hoffman · Yarin Gal · Yingzhen Li · Melanie F. Pradier · Sharad Vikram · Andrew Foong · Sanae Lotfi · Sebastian Farquhar

Thu Dec 09 10:05 AM -- 10:25 AM (PST)
Event URL: https://izmailovpavel.github.io/neurips_bdl_competition/

Understanding the fidelity of approximate inference has extraordinary value beyond the standard approach of measuring generalization on a particular task: if approximate inference is working correctly, then we can expect more reliable and accurate deployment across any number of real-world settings. In this regular competition, we invite the community to evaluate the fidelity of approximate Bayesian inference procedures in deep learning, using as a reference Hamiltonian Monte Carlo (HMC) samples obtained by parallelizing computations over hundreds of tensor processing unit (TPU) devices. We consider a variety of tasks, including image recognition, regression, covariate shift, and medical applications, such as diagnosing diabetic retinopathy. All data are publicly available, and we will release several baselines, including stochastic MCMC, variational methods, and deep ensembles.
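As a minimal sketch of what "evaluating fidelity against HMC reference samples" might look like in practice, the snippet below compares an approximate method's predictive class probabilities to HMC reference predictions using two simple discrepancy measures: top-label agreement and mean total variation distance. The function names and the toy arrays are illustrative assumptions, not the competition's official evaluation code.

```python
import numpy as np

def agreement(probs_approx, probs_ref):
    """Fraction of inputs where the approximate and reference
    predictive distributions choose the same class."""
    return float(np.mean(probs_approx.argmax(-1) == probs_ref.argmax(-1)))

def total_variation(probs_approx, probs_ref):
    """Mean total variation distance between per-input
    predictive distributions over classes."""
    return float(np.mean(0.5 * np.abs(probs_approx - probs_ref).sum(-1)))

# Toy example: predictive probabilities for 4 inputs over 3 classes.
# `ref` stands in for HMC reference predictions, `approx` for an
# approximate inference method (e.g., a variational posterior).
ref = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.8, 0.1],
                [0.3, 0.3, 0.4],
                [0.5, 0.4, 0.1]])
approx = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.4, 0.3, 0.3],
                   [0.3, 0.6, 0.1]])

print(agreement(approx, ref))        # → 0.5
print(total_variation(approx, ref))  # → 0.125
```

Higher agreement and lower total variation indicate that the approximate posterior predictive tracks the HMC reference more closely.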

Author Information

Andrew Gordon Wilson (New York University)
Pavel Izmailov (New York University)
Matthew Hoffman (Google)
Yarin Gal (University of Oxford)
Yingzhen Li (Imperial College London)

Yingzhen Li is a senior researcher at Microsoft Research Cambridge. She received her PhD from the University of Cambridge, and she previously interned at Disney Research. She is passionate about building reliable machine learning systems, and her approach combines Bayesian statistics and deep learning. Her contributions to approximate inference include: (1) algorithmic advances, such as variational inference with alternative divergences, combinations of variational inference with MCMC, and approximate inference with implicit distributions; (2) applications of approximate inference, such as uncertainty estimation in Bayesian neural networks and algorithms for training deep generative models. She has served as an area chair at NeurIPS/ICML/ICLR/AISTATS on related research topics, and she is a co-organizer of the AABI 2020 symposium, a flagship event in approximate inference.

Melanie F. Pradier (Microsoft Research)
Sharad Vikram (Google)
Andrew Foong (University of Cambridge)

I am a PhD student in the Machine Learning Group at the University of Cambridge, supervised by Professor Richard E. Turner and advised by Dr. José Miguel Hernández-Lobato. I started my PhD in October 2018. My research focuses on the intersection of probabilistic modelling and deep learning, with work on Bayesian neural networks, meta-learning, modelling equivariance, and PAC-Bayes.

Sanae Lotfi (New York University)
Sebastian Farquhar (University of Oxford)
