

Spotlight in Workshop: Algorithmic Fairness through the Lens of Time

Exploring Predictive Arbitrariness as Unfairness via Predictive Multiplicity and Predictive Churn

Jamelle Watson-Daniels · Lance Strait · Mehadi Hassen · Amy Skerry-Ryan · Alexander D'Amour

Fri 15 Dec 11 a.m. PST — 11:03 a.m. PST
 
Presentation: Algorithmic Fairness through the Lens of Time
Fri 15 Dec 7 a.m. PST — 3:30 p.m. PST

Abstract:

For models to be fair, predictions should not be arbitrary. Predictions can be considered arbitrary if small perturbations in the training data or model specification result in changed decisions for some individuals. In this context, predictive multiplicity, or predictive variation over a set of near-optimal models, has been proposed as a key measure of arbitrariness. Separate from fairness research, another type of predictive inconsistency arises in the context of models that are continuously updated with new data. In this setting, the instability metric is predictive churn: the expected prediction flips between two consecutively trained models. Interestingly, these streams of research and measures of predictive inconsistency have been studied largely independently, although sometimes conflated. In this paper, we review these notions and study their similarities and differences on real datasets. We find that they do in fact measure distinct notions of arbitrariness, that they are not immediately mitigated by using uncertainty-aware prediction methods, and that both exhibit strong dependence on data and model specification.
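To make the two metrics concrete, here is a minimal sketch, not the paper's implementation: predictive churn computed as the fraction of test examples whose predicted label flips between two consecutively trained models, and predictive multiplicity approximated by the disagreement (ambiguity) of a set of near-optimal models. The synthetic data, logistic regression models, bootstrap construction of the near-optimal set, and the epsilon threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data; any binary classification task would do.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=1)
X_new, X_test, y_new, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=2)

def churn(model_a, model_b, X_eval):
    # Fraction of examples whose predicted label flips between two models
    # trained consecutively (an empirical estimate of predictive churn).
    return np.mean(model_a.predict(X_eval) != model_b.predict(X_eval))

def ambiguity(models, X_eval):
    # Fraction of examples on which the near-optimal models disagree,
    # a common predictive-multiplicity measure.
    preds = np.stack([m.predict(X_eval) for m in models])
    return np.mean(preds.min(axis=0) != preds.max(axis=0))

# Predictive churn: retrain after new data arrives, compare consecutive models.
model_t = LogisticRegression(max_iter=1000).fit(X_train, y_train)
model_t1 = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_new]), np.concatenate([y_train, y_new]))
print(f"churn: {churn(model_t, model_t1, X_test):.3f}")

# Predictive multiplicity: approximate the near-optimal set with models fit on
# bootstrap resamples, keeping those within eps of the best observed accuracy.
rng = np.random.default_rng(0)
candidates = []
for _ in range(20):
    idx = rng.integers(0, len(X_train), len(X_train))
    candidates.append(LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx]))
accs = np.array([m.score(X_test, y_test) for m in candidates])
eps = 0.01
near_optimal = [m for m, a in zip(candidates, accs) if a >= accs.max() - eps]
print(f"ambiguity over {len(near_optimal)} near-optimal models: "
      f"{ambiguity(near_optimal, X_test):.3f}")
```

The bootstrap resampling above is only a crude stand-in for a properly enumerated near-optimal (Rashomon) set. The point is that the two quantities are computed over different collections of models: churn over a consecutive pair from a model-update pipeline, multiplicity over a set of equally good competing models.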
