Identifying unfamiliar inputs, also known as out-of-distribution (OOD) detection, is a crucial property of any decision-making process. A simple and empirically validated technique is based on deep ensembles, where the variance of predictions over different neural networks acts as a substitute for input uncertainty. However, a theoretical understanding of the inductive biases behind the performance of deep ensembles' uncertainty estimates is missing. To better describe their behavior, we study deep ensembles with large layer widths operating in simplified linear training regimes, in which the functions trained with gradient descent can be described by the neural tangent kernel. We identify two sources of noise, each inducing a distinct inductive bias in the predictive variance at initialization. We further show, theoretically and empirically, that both noise sources affect the predictive variance of non-linear deep ensembles in toy models and realistic settings after training. Finally, we propose practical ways to eliminate part of these noise sources, leading to significant changes and improved OOD detection in trained deep ensembles.
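The core technique the abstract refers to can be sketched in a few lines: score each input by the variance of predictions across independently initialized ensemble members. The snippet below is a minimal illustration, assuming toy linear models through the origin as stand-ins for independently trained networks (this toy choice, and the names `ensemble_variance`, `members`, are illustrative assumptions, not the paper's NTK setup).

```python
import numpy as np

def ensemble_variance(predict_fns, x):
    """OOD score: per-input variance of predictions across ensemble members."""
    preds = np.stack([f(x) for f in predict_fns])  # shape (M, N, out_dim)
    return preds.var(axis=0)                       # variance over the M members

# Toy ensemble: independently initialized random linear models standing in
# for independently trained networks (illustrative assumption).
rng = np.random.default_rng(0)
members = [(lambda W: (lambda x: x @ W))(rng.normal(size=(2, 1)))
           for _ in range(10)]

x_in = np.zeros((1, 2))           # at the origin, every linear member outputs 0
x_far = np.array([[10.0, 10.0]])  # far from the origin, members disagree

var_in = ensemble_variance(members, x_in).mean()
var_far = ensemble_variance(members, x_far).mean()
```

Inputs where the members disagree (large variance) are flagged as out-of-distribution; in this toy setting `var_far` exceeds `var_in`.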
Author Information
Seijin Kobayashi (ETHZ)
Pau Vilimelis Aceituno (Institute of Neuroinformatics, University of Zurich and ETH Zurich, Swiss Federal Institute of Technology)
Johannes von Oswald (ETH Zurich)
More from the Same Authors
- 2021 Spotlight: Credit Assignment in Neural Networks through Deep Feedback Control »
  Alexander Meulemans · Matilde Tristany Farinha · Javier Garcia Ordonez · Pau Vilimelis Aceituno · João Sacramento · Benjamin F. Grewe
- 2022: Random initialisations performing above chance and how to find them »
  Frederik Benzing · Simon Schug · Robert Meier · Johannes von Oswald · Yassir Akram · Nicolas Zucchet · Laurence Aitchison · Angelika Steger
- 2022: Meta-Learning via Classifier(-free) Guidance »
  Elvis Nava · Seijin Kobayashi · Yifei Yin · Robert Katzschmann · Benjamin F. Grewe
- 2022 Poster: A contrastive rule for meta-learning »
  Nicolas Zucchet · Simon Schug · Johannes von Oswald · Dominic Zhao · João Sacramento
- 2022 Poster: The least-control principle for local learning at equilibrium »
  Alexander Meulemans · Nicolas Zucchet · Seijin Kobayashi · Johannes von Oswald · João Sacramento
- 2021 Poster: Credit Assignment in Neural Networks through Deep Feedback Control »
  Alexander Meulemans · Matilde Tristany Farinha · Javier Garcia Ordonez · Pau Vilimelis Aceituno · João Sacramento · Benjamin F. Grewe
- 2021 Poster: Posterior Meta-Replay for Continual Learning »
  Christian Henning · Maria Cervera · Francesco D'Angelo · Johannes von Oswald · Regina Traber · Benjamin Ehret · Seijin Kobayashi · Benjamin F. Grewe · João Sacramento
- 2021 Poster: Learning where to learn: Gradient sparsity in meta and continual learning »
  Johannes von Oswald · Dominic Zhao · Seijin Kobayashi · Simon Schug · Massimo Caccia · Nicolas Zucchet · João Sacramento