Spotlight Poster

Auditing Fairness by Betting

Ben Chugg · Santiago Cortes-Gomez · Bryan Wilder · Aaditya Ramdas

Great Hall & Hall B1+B2 (level 1) #1520
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST


We provide practical, efficient, and nonparametric methods for auditing the fairness of deployed classification and regression models. Whereas previous work relies on a fixed sample size, our methods are sequential and allow for the continuous monitoring of incoming data, making them highly amenable to tracking the fairness of real-world systems. We also allow the data to be collected by a probabilistic policy, as opposed to sampled uniformly from the population. This enables auditing to be conducted on data gathered for another purpose. Moreover, this policy may change over time, and different policies may be used on different subpopulations. Finally, our methods can handle distribution shift resulting from either changes to the model or changes in the underlying population. Our approach is based on recent progress in anytime-valid inference and game-theoretic statistics, in particular the "testing by betting" framework. These connections ensure that our methods are interpretable, fast, and easy to implement. We demonstrate the efficacy of our approach on three benchmark fairness datasets.
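To illustrate the general idea behind "testing by betting," the following is a minimal sketch (not the authors' implementation) of a sequential demographic-parity audit. It assumes paired binary decisions, one from each of two groups per round, and a fixed bet size `lam`; the paper's actual methods use adaptive betting strategies and handle far more general settings (probabilistic collection policies, distribution shift, regression). Under the null of equal positive rates, the wealth process below is a nonnegative martingale, so by Ville's inequality, rejecting whenever wealth reaches 1/alpha controls the type-I error at level alpha at any data-dependent stopping time.

```python
import random

def audit_by_betting(pairs, alpha=0.05, lam=1.0):
    """Sequential demographic-parity audit via testing by betting (sketch).

    pairs: iterable of (y_a, y_b) binary decisions, one per group per round.
    Under the null (equal positive rates), z_t = (y_a - y_b + 1)/2 has mean
    1/2, so `wealth` is a nonnegative martingale; rejecting when it reaches
    1/alpha is anytime-valid by Ville's inequality. Requires |lam| < 2 so
    wealth stays nonnegative.
    """
    wealth = 1.0
    for t, (y_a, y_b) in enumerate(pairs, start=1):
        z = (y_a - y_b + 1) / 2        # rescale the difference into [0, 1]
        wealth *= 1 + lam * (z - 0.5)  # bet against the no-disparity null
        if wealth >= 1 / alpha:
            return t, wealth           # reject: evidence of unfairness
    return None, wealth                # no rejection so far; may continue

# Hypothetical unfair model: positive rate 0.8 for group A, 0.5 for group B.
rng = random.Random(0)
pairs = [(int(rng.random() < 0.8), int(rng.random() < 0.5))
         for _ in range(2000)]
stop, wealth = audit_by_betting(pairs)
print(stop, wealth)
```

Because the guarantee holds at every time step simultaneously, the audit can be stopped, continued, or monitored indefinitely without inflating the false-alarm rate, which is what makes this style of test suited to continuous monitoring of deployed systems.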
