

Poster in Workshop: Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization

Improving Deep Ensembles without Communication

Konstantinos Pitas · Michael Arbel · Julyan Arbel


Abstract:

Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised deep learning. We propose to improve deep ensembles by optimizing a tighter PAC-Bayesian bound than the most popular ones. Our approach has a number of benefits over previous methods: 1) it improves performance without requiring any communication between ensemble members during training and is therefore trivially parallelizable; 2) it results in a soft-thresholding gradient update that is much simpler than alternative updates. Empirically, we outperform competing approaches that try to improve ensembles by encouraging diversity. We report test accuracy gains for MLP, LeNet, and WideResNet architectures on a variety of datasets.
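
The abstract does not spell out the exact bound or update rule, so the following is only an illustrative sketch of a generic soft-thresholding (proximal) gradient step of the kind the abstract alludes to; the names `soft_threshold`, `prox_gradient_step`, `lr`, and `lam` are hypothetical and not taken from the paper.

```python
import torch

def soft_threshold(x: torch.Tensor, lam: float) -> torch.Tensor:
    """Standard soft-thresholding operator: shrinks each entry of x
    toward zero by lam and zeroes out entries with |x| <= lam."""
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

def prox_gradient_step(param: torch.Tensor, grad: torch.Tensor,
                       lr: float = 1e-2, lam: float = 1e-3) -> torch.Tensor:
    """One proximal-gradient update: an ordinary gradient-descent step,
    followed by soft thresholding of the result."""
    return soft_threshold(param - lr * grad, lr * lam)

# Toy usage: each ensemble member can apply such an update to its own
# parameters independently, so no inter-member communication is needed.
w = torch.randn(10)
g = torch.randn(10)
w = prox_gradient_step(w, g)
```

Because the update acts element-wise on each member's own parameters, it parallelizes trivially across ensemble members, which is the property the abstract emphasizes.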
