Deep ensembles have recently gained popularity in the deep learning community for their conceptual simplicity and efficiency. However, maintaining functional diversity between ensemble members that are independently trained with gradient descent is challenging. This can lead to pathologies when adding more ensemble members, such as a saturation of the ensemble performance, which converges to the performance of a single model. Moreover, this not only affects the quality of the ensemble's predictions, but even more so its uncertainty estimates, and thus its performance on out-of-distribution data. We hypothesize that this limitation can be overcome by discouraging different ensemble members from collapsing to the same function. To this end, we introduce a kernelized repulsive term in the update rule of the deep ensembles. We show that this simple modification not only enforces and maintains diversity among the members but, even more importantly, transforms the maximum a posteriori inference into proper Bayesian inference. Namely, we show that the training dynamics of our proposed repulsive ensembles follow a Wasserstein gradient flow of the KL divergence with the true posterior. We study repulsive terms in weight and function space and empirically compare their performance to standard ensembles and Bayesian baselines on synthetic and real-world prediction tasks.
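The kernelized repulsive update described in the abstract can be illustrated with a minimal sketch: each ensemble member follows its posterior gradient while a normalized sum of kernel gradients pushes members apart. This is an illustrative toy implementation with an RBF kernel on low-dimensional particles, not the authors' code; the function names, the fixed bandwidth, and the toy log posterior are assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(X, bandwidth=1.0):
    """Pairwise RBF kernel matrix and its gradients w.r.t. the first argument."""
    diffs = X[:, None, :] - X[None, :, :]          # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)         # (n, n)
    K = np.exp(-sq_dists / (2 * bandwidth ** 2))   # (n, n)
    # grad_{x_i} k(x_i, x_j) = -(x_i - x_j) / h^2 * k(x_i, x_j)
    grad_K = -diffs / bandwidth ** 2 * K[:, :, None]
    return K, grad_K

def repulsive_update(X, grad_log_post, step=1e-2, bandwidth=1.0):
    """One step of a kernelized repulsive ensemble update (sketch).

    X:             (n, d) array, one row per ensemble member (particle).
    grad_log_post: function mapping (n, d) -> (n, d) gradients of the
                   log posterior for each member.
    """
    K, grad_K = rbf_kernel(X, bandwidth)
    # Repulsion: kernel gradients summed over the other members and
    # normalized by the kernel row sums; subtracting it drives members apart.
    repulsion = grad_K.sum(axis=1) / K.sum(axis=1, keepdims=True)
    return X + step * (grad_log_post(X) - repulsion)

# Toy usage: two nearby members under a standard normal log posterior.
X = np.array([[0.1], [0.2]])
grad = lambda X: -X                       # grad log N(0, 1)
X_rep = repulsive_update(X, grad, step=0.1)
X_plain = X + 0.1 * grad(X)               # plain (non-repulsive) step
# The repulsive step keeps the members further apart than the plain step.
```

Without the repulsion term the update reduces to independent gradient ascent on the log posterior, i.e. a standard deep ensemble; the extra term is what prevents members from collapsing to the same mode.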
Author Information
Francesco D'Angelo (Swiss Federal Institute of Technology)
Vincent Fortuin (ETH Zürich)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Repulsive Deep Ensembles are Bayesian
More from the Same Authors
- 2021: PCA Subspaces Are Not Always Optimal for Bayesian Learning
  Alexandre Bense · Amir Joudaki · Tim G. J. Rudner · Vincent Fortuin
- 2021: Deep Classifiers with Label Noise Modeling and Distance Awareness
  Vincent Fortuin · Mark Collier · Florian Wenzel · James Allingham · Jeremiah Liu · Dustin Tran · Balaji Lakshminarayanan · Jesse Berent · Rodolphe Jenatton · Effrosyni Kokiopoulou
- 2021: Pathologies in Priors and Inference for Bayesian Transformers
  Tristan Cinquin · Alexander Immer · Max Horn · Vincent Fortuin
- 2021 Poster: Posterior Meta-Replay for Continual Learning
  Christian Henning · Maria Cervera · Francesco D'Angelo · Johannes von Oswald · Regina Traber · Benjamin Ehret · Seijin Kobayashi · Benjamin F. Grewe · João Sacramento