Bayesian methods promise to fix many shortcomings of deep learning, but they are impractical and rarely match the performance of standard methods, let alone improve upon them. In this paper, we demonstrate practical training of deep networks with natural-gradient variational inference. By applying techniques such as batch normalisation, data augmentation, and distributed training, we achieve similar performance in about the same number of epochs as the Adam optimiser, even on large datasets such as ImageNet. Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted. This work enables practical deep learning while preserving the benefits of Bayesian principles. A PyTorch implementation is available as a plug-and-play optimiser.
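To give a concrete sense of the natural-gradient variational-inference update that the paper makes practical, below is a minimal PyTorch sketch of a VOGN-style optimiser with a diagonal Gaussian posterior over the weights. It is an illustration under simplifying assumptions, not the released plug-and-play implementation: the class name `VOGNSketch`, its hyperparameter names, and the use of mini-batch squared gradients (the paper's optimiser uses per-example squared gradients, momentum, and distributed averaging) are choices made here for brevity.

```python
import torch


class VOGNSketch:
    """Hypothetical, simplified VOGN-style natural-gradient VI optimiser.

    Keeps a diagonal Gaussian posterior N(mu, sigma^2) over each parameter,
    with sigma^2 = 1 / (N * (s + prior_prec / N)).
    """

    def __init__(self, model, train_set_size, prior_prec=1.0, lr=1e-3, beta2=0.999):
        self.params = [p for p in model.parameters() if p.requires_grad]
        self.N = float(train_set_size)   # dataset size scales the likelihood term
        self.prior_prec = prior_prec     # Gaussian prior precision (weight-decay analogue)
        self.lr = lr
        self.beta2 = beta2
        self.mu = [p.detach().clone() for p in self.params]   # posterior means
        self.s = [torch.ones_like(p) for p in self.params]    # precision estimates

    def sample_and_set(self):
        """Load one weight sample w ~ N(mu, sigma^2) into the model before the forward pass."""
        for p, mu, s in zip(self.params, self.mu, self.s):
            sigma = (self.N * (s + self.prior_prec / self.N)).rsqrt()
            p.data = mu + sigma * torch.randn_like(mu)

    def step(self):
        """Natural-gradient update of (mu, s) using the gradients stored in .grad."""
        for p, mu, s in zip(self.params, self.mu, self.s):
            g = p.grad
            # Precision update: EMA of squared gradients (the paper uses per-example
            # squared gradients for a Gauss-Newton-style estimate).
            s.mul_(self.beta2).add_((1.0 - self.beta2) * g * g)
            # Mean update: gradient plus prior term, preconditioned by the precision.
            denom = s + self.prior_prec / self.N
            mu.add_(-self.lr * (g + (self.prior_prec / self.N) * mu) / denom)
        # Leave the model holding the posterior mean between steps.
        for p, mu in zip(self.params, self.mu):
            p.data = mu.clone()


# Toy usage on random data (one Monte Carlo weight sample per step).
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
opt = VOGNSketch(model, train_set_size=64)
for _ in range(5):
    opt.sample_and_set()                 # sample weights from the current posterior
    model.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    print(float(loss))
```

At test time the paper averages predictions over several weight samples drawn from the learned posterior, which is what yields the calibrated predictive probabilities and improved out-of-distribution uncertainties mentioned above.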
Author Information
Kazuki Osawa (Tokyo Institute of Technology)
Siddharth Swaroop (University of Cambridge)
Emtiyaz Khan (RIKEN)
Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference Team. He is also a visiting professor at the Tokyo University of Agriculture and Technology (TUAT). Previously, he was a postdoc and then a scientist at École Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine learning courses and received a teaching award. He finished his PhD in machine learning at the University of British Columbia in 2012. The main goal of Emti’s research is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings. For the past 10 years, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The Approximate Bayesian Inference Team now continues to use these principles, as well as derive new ones, to solve real-world problems.
Anirudh Jain (Indian Institute of Technology (ISM), Dhanbad)
Runa Eschenhagen (University of Osnabrueck)
Richard Turner (University of Cambridge)
Rio Yokota (Tokyo Institute of Technology, AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory (RWBC-OIL), National Institute of Advanced Industrial Science and Technology (AIST))
Rio Yokota received his BS, MS, and PhD from Keio University in 2003, 2005, and 2009, respectively. He is currently an Associate Professor at GSIC, Tokyo Institute of Technology. His research interests span high-performance computing, hierarchical low-rank approximation methods, and scalable deep learning. He was part of the team that won the ACM Gordon Bell Prize for price/performance in 2009.
More from the Same Authors
- 2020 Poster: Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks
  Ryo Karakida · Kazuki Osawa
- 2020 Oral: Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks
  Ryo Karakida · Kazuki Osawa
- 2020 Poster: Efficient Low Rank Gaussian Variational Inference for Neural Networks
  Marcin Tomczak · Siddharth Swaroop · Richard Turner
- 2020 Poster: Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes
  Andrew Foong · Wessel Bruinsma · Jonathan Gordon · Yann Dubois · James Requeima · Richard Turner
- 2020 Poster: On the Expressiveness of Approximate Inference in Bayesian Neural Networks
  Andrew Foong · David Burt · Yingzhen Li · Richard Turner
- 2020 Poster: VAEM: a Deep Generative Model for Heterogeneous Mixed Type Data
  Chao Ma · Sebastian Tschiatschek · Richard Turner · José Miguel Hernández-Lobato · Cheng Zhang
- 2020 Poster: Continual Deep Learning by Functional Regularisation of Memorable Past
  Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan
- 2020 Oral: Continual Deep Learning by Functional Regularisation of Memorable Past
  Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan
- 2019 Poster: Icebreaker: Element-wise Efficient Information Acquisition with a Bayesian Deep Latent Gaussian Model
  Wenbo Gong · Sebastian Tschiatschek · Sebastian Nowozin · Richard Turner · José Miguel Hernández-Lobato · Cheng Zhang
- 2019 Poster: Approximate Inference Turns Deep Networks into Gaussian Processes
  Mohammad Emtiyaz Khan · Alexander Immer · Ehsan Abedi · Maciej Korzepa
- 2019 Tutorial: Deep Learning with Bayesian Principles
  Mohammad Emtiyaz Khan
- 2018 Poster: Infinite-Horizon Gaussian Processes
  Arno Solin · James Hensman · Richard Turner
- 2018 Poster: Geometrically Coupled Monte Carlo Sampling
  Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller
- 2018 Spotlight: Geometrically Coupled Monte Carlo Sampling
  Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller
- 2017 Poster: Streaming Sparse Gaussian Process Approximations
  Thang Bui · Cuong Nguyen · Richard Turner
- 2017 Poster: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning
  Shixiang (Shane) Gu · Timothy Lillicrap · Richard Turner · Zoubin Ghahramani · Bernhard Schölkopf · Sergey Levine
- 2016 Poster: Rényi Divergence Variational Inference
  Yingzhen Li · Richard Turner
- 2015 Poster: Neural Adaptive Sequential Monte Carlo
  Shixiang (Shane) Gu · Zoubin Ghahramani · Richard Turner
- 2015 Poster: Kullback-Leibler Proximal Variational Inference
  Mohammad Emtiyaz Khan · Pierre Baque · François Fleuret · Pascal Fua
- 2015 Poster: Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels
  Felipe Tobar · Thang Bui · Richard Turner
- 2015 Poster: Stochastic Expectation Propagation
  Yingzhen Li · José Miguel Hernández-Lobato · Richard Turner
- 2015 Spotlight: Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels
  Felipe Tobar · Thang Bui · Richard Turner
- 2015 Spotlight: Stochastic Expectation Propagation
  Yingzhen Li · José Miguel Hernández-Lobato · Richard Turner
- 2014 Poster: Decoupled Variational Gaussian Inference
  Mohammad Emtiyaz Khan
- 2014 Poster: Tree-structured Gaussian Process Approximations
  Thang Bui · Richard Turner
- 2014 Spotlight: Tree-structured Gaussian Process Approximations
  Thang Bui · Richard Turner
- 2012 Poster: Fast Bayesian Inference for Non-Conjugate Gaussian Process Regression
  Mohammad Emtiyaz Khan · Shakir Mohamed · Kevin P Murphy
- 2011 Poster: Probabilistic amplitude and frequency demodulation
  Richard Turner · Maneesh Sahani
- 2011 Spotlight: Probabilistic amplitude and frequency demodulation
  Richard Turner · Maneesh Sahani
- 2010 Poster: Variational bounds for mixed-data factor analysis
  Mohammad Emtiyaz Khan · Benjamin Marlin · Guillaume Bouchard · Kevin P Murphy
- 2009 Oral: Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models
  Baback Moghaddam · Benjamin Marlin · Mohammad Emtiyaz Khan · Kevin P Murphy
- 2009 Poster: Occlusive Components Analysis
  Jörg Lücke · Richard Turner · Maneesh Sahani · Marc Henniges
- 2009 Poster: Accelerating Bayesian Structural Inference for Non-Decomposable Gaussian Graphical Models
  Baback Moghaddam · Benjamin Marlin · Mohammad Emtiyaz Khan · Kevin P Murphy
- 2007 Workshop: Beyond Simple Cells: Probabilistic Models for Visual Cortical Processing
  Richard Turner · Pietro Berkes · Maneesh Sahani
- 2007 Poster: Modeling Natural Sounds with Modulation Cascade Processes
  Richard Turner · Maneesh Sahani
- 2007 Poster: On Sparsity and Overcompleteness in Image Models
  Pietro Berkes · Richard Turner · Maneesh Sahani