
Training Uncertainty-Aware Classifiers with Conformalized Deep Learning
Bat-Sheva Einbinder · Yaniv Romano · Matteo Sesia · Yanfei Zhou

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #113

Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty and estimate reliable probabilities. In particular, they tend to be overconfident. We begin to address this problem in the context of multi-class classification by developing a novel training algorithm producing models with more dependable uncertainty estimates, without sacrificing predictive power. The idea is to mitigate overconfidence by minimizing a loss function, inspired by advances in conformal inference, that quantifies model uncertainty by carefully leveraging hold-out data. Experiments with synthetic and real data demonstrate this method can lead to smaller conformal prediction sets with higher conditional coverage, after exact calibration with hold-out data, compared to state-of-the-art alternatives.
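To make the conformal-calibration step concrete, here is a minimal sketch of split conformal prediction for multi-class classification with hold-out data. It uses a simple conformity score (one minus the softmax probability of the true class) rather than the paper's refined training-time loss; the function names and the score choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conformal_calibrate(probs_cal, labels_cal, alpha=0.1):
    """Compute the conformal threshold from hold-out (calibration) data.

    probs_cal: (n, K) array of predicted class probabilities.
    labels_cal: (n,) array of true labels.
    alpha: target miscoverage level (e.g. 0.1 for 90% coverage).
    """
    n = len(labels_cal)
    # Conformity score: one minus the probability assigned to the true class.
    # (A simple illustrative choice; not the score used in the paper.)
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected quantile level, capped at 1.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def conformal_predict(probs_test, qhat):
    """Return a prediction set per test point: all classes whose
    score (one minus predicted probability) is within the threshold."""
    return [np.where(1.0 - p <= qhat)[0].tolist() for p in probs_test]
```

With exact calibration on hold-out data, the resulting sets cover the true label with probability at least 1 - alpha on average; the paper's contribution is a training loss that makes these sets smaller and their coverage closer to conditional.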

Author Information

Bat-Sheva Einbinder (Technion - Israel Institute of Technology)
Yaniv Romano (Technion - Israel Institute of Technology)
Matteo Sesia (University of Southern California)

Matteo Sesia is an assistant professor in the Department of Data Sciences and Operations, at the University of Southern California, Marshall School of Business.

Yanfei Zhou (University of Southern California)
