

Poster in Workshop: Workshop on Distribution Shifts: Connecting Methods and Applications

Engineering Uncertainty Representations to Monitor Distribution Shifts

Thomas Bonnier · Benjamin Bosch


Abstract:

In some classification tasks, the true label is not known until months or even years after the classifier's prediction time. Once the model has been deployed, harmful dataset shift regimes can surface, and without careful model monitoring the damage may prove irreversible by the time true labels arrive. In this paper, we propose a method for practitioners to monitor distribution shifts on unlabeled data. We leverage two representations for quantifying and visualizing model uncertainty. The Adversarial Neighborhood Analysis assesses model uncertainty by aggregating predictions in the neighborhood of a data point and comparing them to the prediction at that single point. The Non-Conformity Analysis exploits the results of conformal prediction and leverages a decision tree to display uncertain zones. We empirically test our approach on scenarios of synthetically generated shifts to demonstrate its efficacy.
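The abstract only outlines the two analyses, so the sketch below is an illustrative approximation rather than the authors' method: it assumes a Gaussian-perturbation "neighborhood", a probability-based non-conformity score, and a fixed calibration quantile, and the helper names (`neighborhood_disagreement`, `nonconformity_scores`) are hypothetical.

```python
# Hedged sketch of the two uncertainty views described in the abstract:
# (1) disagreement between a point's prediction and predictions in its
#     neighborhood, and (2) conformal-style non-conformity scores summarized
#     by a decision tree over feature space. Neighborhood construction and the
#     non-conformity score are assumptions, not the paper's exact definitions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text


def neighborhood_disagreement(model, x, n_neighbors=100, sigma=0.05, seed=None):
    """Fraction of predictions in a Gaussian neighborhood of x that disagree
    with the prediction at x itself (higher = more uncertain)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(1, -1)
    neighbors = x + sigma * rng.standard_normal((n_neighbors, x.shape[1]))
    point_pred = model.predict(x)[0]
    return float(np.mean(model.predict(neighbors) != point_pred))


def nonconformity_scores(model, X, y=None):
    """Simple non-conformity score: 1 - predicted probability of the true
    class (calibration data) or of the predicted class (unlabeled data)."""
    proba = model.predict_proba(X)
    idx = y if y is not None else proba.argmax(axis=1)
    return 1.0 - proba[np.arange(len(X)), idx]


# --- toy usage on synthetic data --------------------------------------------
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_cal = rng.normal(size=(200, 4))
y_cal = (X_cal[:, 0] + X_cal[:, 1] > 0).astype(int)
X_new = rng.normal(loc=0.5, size=(200, 4))  # shifted, unlabeled batch

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Neighborhood analysis on one unlabeled point.
print("disagreement:", neighborhood_disagreement(clf, X_new[0], seed=0))

# Conformal-style threshold from the calibration set, then a shallow decision
# tree that describes which regions of feature space look uncertain.
threshold = np.quantile(nonconformity_scores(clf, X_cal, y_cal), 0.9)
uncertain = (nonconformity_scores(clf, X_new) > threshold).astype(int)
zones = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_new, uncertain)
print(export_text(zones, feature_names=[f"x{i}" for i in range(4)]))
```

The printed tree rules play the role of the "uncertain zones" display: each leaf with a high share of flagged points identifies a region of feature space where the shifted batch is poorly covered by the training distribution.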
