Calibration can reduce overconfident predictions of deep neural networks, but can calibration also accelerate training? In this paper, we show that it can, when used to prioritize examples for subset selection. We study the effect of popular calibration techniques on selecting better subsets of samples during training (also called sample prioritization) and observe that calibration can improve the quality of the selected subsets, reduce the number of examples per epoch (by at least 70%), and thereby speed up the overall training process. We further study the effect of using calibrated pre-trained models, coupled with calibration during training, to guide sample prioritization, which again improves the quality of the selected samples.
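The abstract's core idea can be illustrated with a short sketch. This is a minimal illustration, not the paper's exact procedure: it assumes temperature scaling as the calibration technique, one minus the maximum softmax probability as the uncertainty score, and a hypothetical `keep_fraction` parameter for the per-epoch subset size (set to 0.3 here to mirror the reported reduction of at least 70%).

```python
# A minimal sketch of calibration-guided sample prioritization.
# Assumptions (not from the paper's text): temperature scaling for
# calibration, max-softmax uncertainty for scoring, fixed keep fraction.
import torch
import torch.nn.functional as F


def temperature_scale(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    """Calibrate logits by dividing by a scalar temperature (T > 1 softens)."""
    return logits / temperature


def prioritize_samples(model, loader, temperature=1.5, keep_fraction=0.3,
                       device="cpu"):
    """Score every example with calibrated uncertainty and return the
    indices of the most uncertain `keep_fraction` of the dataset.

    Assumes `loader` iterates in dataset order (shuffle=False), so that
    score positions map back to dataset indices.
    """
    model.eval()
    scores = []
    with torch.no_grad():
        for inputs, _ in loader:
            logits = model(inputs.to(device))
            probs = F.softmax(temperature_scale(logits, temperature), dim=1)
            # Lower max probability => higher uncertainty => higher priority.
            scores.append(1.0 - probs.max(dim=1).values)
    scores = torch.cat(scores)
    k = int(keep_fraction * len(scores))
    return scores.topk(k).indices
```

The returned indices can then be wrapped in a `torch.utils.data.Subset` to train the next epoch on the prioritized examples only.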
Author Information
Ganesh Tata (University of Alberta)
I am a second-year Master's student at the University of Alberta, pursuing my thesis under Prof. Nilanjan Ray on Optical Character Recognition (OCR) and data subset selection.
Gautham Krishna Gudur (Ericsson)

I am a Data Scientist at Ericsson R&D on the Global AI Accelerator (GAIA) team, working on machine intelligence and telecom. I also do independent research with a broad theme of resource-efficient deep learning (accelerating neural network training, human-in-the-loop learning, etc.). Previously, I worked at SmartCardia, an AI-assisted wearable healthcare spin-off from EPFL.
Gopinath Chennupati (Amazon)
Mohammad Emtiyaz Khan (RIKEN, Tokyo)
More from the Same Authors
- 2022: Can Calibration Improve Sample Prioritization? (Ganesh Tata · Gautham Krishna Gudur · Gopinath Chennupati · Mohammad Emtiyaz Khan)
- 2022: Invited Keynote 2 (Mohammad Emtiyaz Khan)
- 2021: Panel (Mohammad Emtiyaz Khan · Atoosa Kasirzadeh · Anna Rogers · Javier González · Suresh Venkatasubramanian · Robert Williamson)
- 2020 Poster: Continual Deep Learning by Functional Regularisation of Memorable Past (Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan)
- 2020 Oral: Continual Deep Learning by Functional Regularisation of Memorable Past (Pingbo Pan · Siddharth Swaroop · Alexander Immer · Runa Eschenhagen · Richard Turner · Mohammad Emtiyaz Khan)
- 2018 Poster: SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient (Aaron Mishkin · Frederik Kunstner · Didrik Nielsen · Mark Schmidt · Mohammad Emtiyaz Khan)