Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound? In this paper, we study the transferability of statistical group fairness for machine learning predictors (i.e., classifiers or regressors) subject to bounded distribution shift. Such shifts may be introduced by initial training-data uncertainties, user adaptation to a deployed predictor, dynamic environments, or the use of pre-trained models in new settings. Herein, we develop a bound that characterizes such transferability, flagging potentially inappropriate deployments of machine learning for socially consequential tasks. We first develop a framework for bounding violations of statistical fairness subject to distribution shift, formulating a generic upper bound for transferred fairness violations as our primary result. We then develop bounds for specific worked examples, focusing on two commonly used fairness definitions (i.e., demographic parity and equalized odds) and two classes of distribution shift (i.e., covariate shift and label shift). Finally, we compare our theoretical bounds to deterministic models of distribution shift and to real-world data, finding that we are able to estimate fairness-violation bounds in practice, even when simplifying assumptions are only approximately satisfied.
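For readers unfamiliar with the two fairness definitions named in the abstract, the sketch below (not taken from the paper) shows how empirical demographic-parity and equalized-odds gaps can be estimated for a binary predictor on a sample; the function names and synthetic arrays are purely illustrative. Under distribution shift, the same predictor would be re-evaluated on target-distribution samples, and the paper's results bound how much such gaps can grow.

```python
# Minimal sketch (illustrative, not the paper's method): empirical
# demographic-parity and equalized-odds gaps for a binary predictor.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| estimated from samples."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_pred, y_true, group):
    """Max over y in {0,1} of |P(Yhat=1 | Y=y, A=0) - P(Yhat=1 | Y=y, A=1)|."""
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(y_true == y) & (group == a)].mean() for a in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Hypothetical source-distribution sample: sensitive attribute, label, prediction.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < 0.4 + 0.1 * group).astype(int)

print("Demographic-parity gap:", demographic_parity_gap(y_pred, group))
print("Equalized-odds gap:", equalized_odds_gap(y_pred, y_true, group))
```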
Author Information
Yatong Chen (UC Santa Cruz, Google Brain)
Reilly Raab (UC Santa Cruz)
My current research involves the dynamics of multiagent systems and the alignment of local incentives with global objectives. My background is in physics, with experience in scientific computing, signal processing, and electronics. I spent a few years between undergrad and grad school backpacking abroad, remotely developing software related to automated circuit design.
Jialu Wang (University of California, Santa Cruz)
Yang Liu (UC Santa Cruz)
More from the Same Authors
- 2021 Spotlight: Unintended Selection: Persistent Qualification Rate Disparities and Interventions » Reilly Raab · Yang Liu
- 2021: Unfairness Despite Awareness: Group-Fair Classification with Strategic Agents » Andrew Estornell · Sanmay Das · Yang Liu · Yevgeniy Vorobeychik
- 2022: Tier Balancing: Towards Dynamic Fairness over Underlying Causal Factors » Zeyu Tang · Yatong Chen · Yang Liu · Kun Zhang
- 2022: Fishy: Layerwise Fisher Approximation for Higher-order Neural Network Optimization » Abel Peirson · Ehsan Amid · Yatong Chen · Vladimir Feinberg · Manfred Warmuth · Rohan Anil
- 2022: Fast Implicit Constrained Optimization of Non-decomposable Objectives for Deep Networks » Yatong Chen · Abhishek Kumar · Yang Liu · Ehsan Amid
- 2022 Spotlight: Certifying Some Distributional Fairness with Subpopulation Decomposition » Mintong Kang · Linyi Li · Maurice Weber · Yang Liu · Ce Zhang · Bo Li
- 2022 Poster: Certifying Some Distributional Fairness with Subpopulation Decomposition » Mintong Kang · Linyi Li · Maurice Weber · Yang Liu · Ce Zhang · Bo Li
- 2022 Poster: Adaptive Data Debiasing through Bounded Exploration » Yifan Yang · Yang Liu · Parinaz Naghizadeh
- 2021: Revisiting Dynamics in Strategic ML » Yang Liu
- 2021: Bounded Fairness Transferability subject to Distribution Shift » Reilly Raab · Yatong Chen · Yang Liu
- 2021 Poster: Unintended Selection: Persistent Qualification Rate Disparities and Interventions » Reilly Raab · Yang Liu
- 2021 Poster: Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial » Yang Liu · Jialu Wang
- 2021 Poster: Policy Learning Using Weak Supervision » Jingkang Wang · Hongyi Guo · Zhaowei Zhu · Yang Liu
- 2021 Poster: Bandit Learning with Delayed Impact of Actions » Wei Tang · Chien-Ju Ho · Yang Liu
- 2020: Contributed Talk 4: Strategic Recourse in Linear Classification » Yatong Chen · Yang Liu
- 2020 Poster: Learning Strategy-Aware Linear Classifiers » Yiling Chen · Yang Liu · Chara Podimata
- 2020 Poster: How do fair decisions fare in long-term qualification? » Xueru Zhang · Ruibo Tu · Yang Liu · Mingyan Liu · Hedvig Kjellstrom · Kun Zhang · Cheng Zhang
- 2020 Poster: Optimal Query Complexity of Secure Stochastic Convex Optimization » Wei Tang · Chien-Ju Ho · Yang Liu