

Poster in Workshop: Distribution shifts: connecting methods and applications (DistShift)

The impact of domain shift on the calibration of fine-tuned models

Jay Mohta · Colin Raffel


Abstract:

Transfer learning has become a standard technique in computer vision and natural language processing because it often substantially improves performance on downstream tasks. Recent work by Hendrycks et al. demonstrated that using a pre-trained model can also significantly improve a model's calibration, i.e. how well the model's confidence estimates correspond to the probability of its predictions being correct. In this paper, we provide some nuance to the claim that pre-training improves calibration by demonstrating that this beneficial effect diminishes when there is a domain shift between the pre-training and fine-tuning tasks.
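For reference, calibration in the sense used above is commonly quantified with Expected Calibration Error (ECE): predictions are binned by confidence, and each bin's average confidence is compared with its empirical accuracy. The abstract does not name the specific metric used in the paper, so the sketch below shows only this standard formulation; the function and variable names are illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: bin predictions by confidence and average
    |accuracy - confidence| per bin, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            bin_conf = confidences[mask].mean()  # average predicted confidence in bin
            bin_acc = correct[mask].mean()       # empirical accuracy in bin
            ece += mask.mean() * abs(bin_acc - bin_conf)
    return ece

# Toy usage: confidences taken as the max softmax probability of a classifier
probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7]])
labels = np.array([0, 1, 1])
preds = probs.argmax(axis=1)
print(expected_calibration_error(probs.max(axis=1), preds == labels))
```

A perfectly calibrated model has ECE of zero, since within every confidence bin the model is correct exactly as often as its confidence suggests.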
