Poster
in
Workshop: Bayesian Deep Learning

Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data

Kumud Lakara · Akshat Bhandari · Pratinav Seth · Ujjwal Verma


Abstract:

Most machine learning models operate under the assumption that the training, testing, and deployment data are independent and identically distributed (i.i.d.). This assumption generally does not hold in natural settings, where deployment data is subject to various kinds of distributional shift. The degradation in a model's performance typically grows with the magnitude of this shift. It is therefore necessary to evaluate a model's uncertainty and its robustness to distributional shift in order to obtain a realistic estimate of its expected performance on real-world data. Existing methods for evaluating uncertainty and robustness are lacking and often fail to paint the full picture. Moreover, most analysis so far has focused primarily on classification tasks. In this paper, we propose more insightful metrics for general regression tasks using the Shifts Weather Prediction Dataset. We also present an evaluation of the baseline methods using these metrics.
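For concreteness, the sketch below illustrates the kind of shift-aware evaluation the abstract describes: a probabilistic regression model that outputs a Gaussian predictive mean and variance is scored separately on an in-domain and a shifted split using RMSE, negative log-likelihood, and a simple error-retention summary. The `model.predict` interface, the split names, and the retention-AUC approximation are assumptions made for illustration; they are not the paper's actual metrics or code.

```python
# A minimal sketch of shift-aware evaluation for probabilistic regression,
# assuming the model returns a Gaussian predictive mean and variance per target.
# Names and splits are illustrative, not the paper's pipeline.
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def gaussian_nll(y_true, mean, var, eps=1e-8):
    """Average negative log-likelihood under a Gaussian predictive distribution."""
    var = np.maximum(var, eps)
    return float(np.mean(0.5 * (np.log(2 * np.pi * var) + (y_true - mean) ** 2 / var)))

def error_retention_auc(y_true, mean, var):
    """Rough area under the error-retention curve: retain predictions in order of
    increasing predictive variance and track the MSE of the retained subset.
    Lower is better; well-calibrated uncertainty rejects the largest errors first."""
    order = np.argsort(var)                       # most confident predictions first
    sq_err = (y_true[order] - mean[order]) ** 2
    cum_mse = np.cumsum(sq_err) / np.arange(1, len(sq_err) + 1)
    return float(np.mean(cum_mse))                # average MSE over retention fractions

def evaluate(model, splits):
    """Report metrics separately on in-domain and shifted splits."""
    for name, (x, y) in splits.items():
        mean, var = model.predict(x)              # hypothetical API: predictive mean, variance
        print(f"{name:>10s}  RMSE={rmse(y, mean):.3f}  "
              f"NLL={gaussian_nll(y, mean, var):.3f}  "
              f"R-AUC={error_retention_auc(y, mean, var):.3f}")

# Hypothetical usage, mirroring the in-domain / shifted structure of the
# Shifts Weather Prediction data:
# evaluate(model, {"in-domain": (x_in, y_in), "shifted": (x_shift, y_shift)})
```

Comparing the same metrics across the in-domain and shifted splits is what exposes how much a model's accuracy and calibration degrade under distributional shift.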
