Neural networks are known to produce poor uncertainty estimates, and a variety of approaches have been proposed to remedy this issue. These include deep ensemble, a simple and effective method that achieves state-of-the-art results on uncertainty-aware learning tasks. In this work, we explore a combinatorial generalization of deep ensemble called deep combinatorial aggregation (DCA). DCA creates multiple instances of network components and aggregates their combinations to produce diversified model proposals and predictions. DCA components can be defined at different levels of granularity, and we find that coarse-grain DCAs can outperform deep ensemble for uncertainty-aware learning, both in terms of predictive performance and uncertainty estimation. For fine-grain DCAs, we find that an average parameterization approach, deep combinatorial weight averaging (DCWA), can improve on baseline training: it is on par with stochastic weight averaging (SWA) but requires neither a custom training schedule nor adaptation of BatchNorm layers. Furthermore, we propose a consistency enforcing loss that aids the training of DCWA and modelwise DCA. We experiment on in-domain, distributional-shift, and out-of-distribution image classification tasks, and empirically confirm the effectiveness of the DCWA and DCA approaches.
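To make the combinatorial idea concrete, the following is a minimal toy sketch (not the authors' implementation): a two-layer MLP where each layer slot has K independent instances, so DCA aggregates predictions over all K×K layer combinations, whereas a deep ensemble would only use K diagonal pairings. The DCWA line illustrates the fine-grain counterpart by averaging instance parameters into a single model. All shapes, the value of K, and the random weights are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer MLP; each layer slot holds K independent instances.
K = 3
W1 = [rng.normal(size=(4, 8)) for _ in range(K)]  # layer-1 instances
W2 = [rng.normal(size=(8, 2)) for _ in range(K)]  # layer-2 instances

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, w1, w2):
    # ReLU hidden layer followed by a softmax output layer.
    return softmax(np.maximum(x @ w1, 0.0) @ w2)

x = rng.normal(size=(5, 4))  # a batch of 5 toy inputs

# DCA: average the predictive distributions of all K*K layer combinations.
probs_dca = np.mean(
    [forward(x, w1, w2) for w1, w2 in itertools.product(W1, W2)],
    axis=0,
)

# DCWA (fine-grain): average the instance parameters into one model.
probs_dcwa = forward(x, np.mean(W1, axis=0), np.mean(W2, axis=0))

print(probs_dca.shape)  # (5, 2): each row is an aggregated distribution
```

Note that aggregating K² combinations reuses only 2K trained components, which is what distinguishes DCA from training K² independent ensemble members.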
Author Information
Yuesong Shen (Technical University of Munich)
Daniel Cremers (Technical University of Munich)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Deep Combinatorial Aggregation
More from the Same Authors
- 2021 : STEP: Segmenting and Tracking Every Pixel
  Mark Weber · Jun Xie · Maxwell Collins · Yukun Zhu · Paul Voigtlaender · Hartwig Adam · Bradley Green · Andreas Geiger · Bastian Leibe · Daniel Cremers · Aljosa Osep · Laura Leal-Taixé · Liang-Chieh Chen
- 2022 Poster: What Makes Graph Neural Networks Miscalibrated?
  Hans Hao-Hsun Hsu · Yuesong Shen · Christian Tomani · Daniel Cremers
- 2022 : A Graph Is More Than Its Nodes: Towards Structured Uncertainty-Aware Learning on Graphs
  Hans Hao-Hsun Hsu · Yuesong Shen · Daniel Cremers
- 2022 Spotlight: Lightning Talks 3B-1
  Tianying Ji · Tongda Xu · Giulia Denevi · Aibek Alanov · Martin Wistuba · Wei Zhang · Yuesong Shen · Massimiliano Pontil · Vadim Titov · Yan Wang · Yu Luo · Daniel Cremers · Yanjun Han · Arlind Kadra · Dailan He · Josif Grabocka · Zhengyuan Zhou · Fuchun Sun · Carlo Ciliberto · Dmitry Vetrov · Mingxuan Jing · Chenjian Gao · Aaron Flores · Tsachy Weissman · Han Gao · Fengxiang He · Kunzan Liu · Wenbing Huang · Hongwei Qin
- 2022 Spotlight: What Makes Graph Neural Networks Miscalibrated?
  Hans Hao-Hsun Hsu · Yuesong Shen · Christian Tomani · Daniel Cremers
- 2022 Spotlight: Lightning Talks 1B-1
  Qitian Wu · Runlin Lei · Rongqin Chen · Luca Pinchetti · Yangze Zhou · Abhinav Kumar · Hans Hao-Hsun Hsu · Wentao Zhao · Chenhao Tan · Zhen Wang · Shenghui Zhang · Yuesong Shen · Tommaso Salvatori · Gitta Kutyniok · Zenan Li · Amit Sharma · Leong Hou U · Yordan Yordanov · Christian Tomani · Bruno Ribeiro · Yaliang Li · David P Wipf · Daniel Cremers · Bolin Ding · Beren Millidge · Ye Li · Yuhang Song · Junchi Yan · Zhewei Wei · Thomas Lukasiewicz
- 2021 Poster: Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation
  Florian Bernard · Daniel Cremers · Johan Thunberg
- 2020 Poster: Deep Shells: Unsupervised Shape Correspondence with Optimal Transport
  Marvin Eisenberger · Aysim Toker · Laura Leal-Taixé · Daniel Cremers
- 2016 Poster: Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images
  Vladimir Golkov · Marcin Skwark · Antonij Golkov · Alexey Dosovitskiy · Thomas Brox · Jens Meiler · Daniel Cremers
- 2016 Oral: Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images
  Vladimir Golkov · Marcin Skwark · Antonij Golkov · Alexey Dosovitskiy · Thomas Brox · Jens Meiler · Daniel Cremers