

Poster

Credal Deep Ensembles for Uncertainty Quantification

Kaizheng Wang · Fabio Cuzzolin · Shireen Kudukkil Manchingal · Keivan Shariatmadar · David Moens · Hans Hallez

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

This paper introduces an innovative approach to classification called Credal Deep Ensembles (CreDEs), namely, ensembles of novel Credal-Set Neural Networks (CreNets). CreNets are trained to predict a lower and an upper probability bound for each class, which together determine a convex set of probabilities (a credal set) over the class set. Training employs a loss inspired by distributionally robust optimization, which simulates potential divergence of the test distribution from the training distribution, so that the width of the predicted probability interval reflects the 'epistemic' uncertainty about the future data distribution. Ensembles are constructed by training multiple CreNets, each with a different random seed, and averaging the predicted intervals. Extensive experiments are conducted on various out-of-distribution (OOD) detection benchmarks (CIFAR10/100 vs SVHN/Tiny-ImageNet, CIFAR10 vs CIFAR10-C, ImageNet vs ImageNet-O) and with different network architectures (ResNet50, VGG16, and ViT Base). Compared to Deep Ensemble baselines, CreDEs demonstrate higher test accuracy, lower expected calibration error, and significantly improved epistemic uncertainty estimation.
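The abstract does not specify the exact parameterisation of the interval-valued output head or the DRO-inspired loss; the sketch below is only a minimal PyTorch illustration of the two ideas it does describe, a per-class lower/upper probability bound and interval averaging across an ensemble. All class and function names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CredalHead(nn.Module):
    """Hypothetical interval-valued head: maps features to a lower and an
    upper probability bound per class (one possible parameterisation only)."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.lower_logits = nn.Linear(feat_dim, num_classes)
        self.width_logits = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor):
        # Lower bound: an ordinary probability vector.
        lower = torch.softmax(self.lower_logits(feats), dim=-1)
        # Upper bound: lower bound plus a non-negative width, clipped to [0, 1];
        # the interval width is intended to reflect epistemic uncertainty.
        width = F.softplus(self.width_logits(feats))
        upper = torch.clamp(lower + width, max=1.0)
        return lower, upper


def ensemble_intervals(heads, feats):
    """Average per-model probability intervals, mirroring the CreDE construction
    of averaging the intervals of CreNets trained from different random seeds."""
    lowers, uppers = zip(*(h(feats) for h in heads))
    return torch.stack(lowers).mean(0), torch.stack(uppers).mean(0)
```

In this sketch, the ensembled lower and upper bounds still satisfy lower ≤ upper element-wise, so the averaged intervals again define a credal set whose width can be read as an epistemic uncertainty score.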
