We study the \emph{transferability of fair predictors} (i.e., classifiers or regressors) under domain adaptation. Given a predictor that is “fair” on some \emph{source} distribution (of features and labels), is it still fair on a \emph{realized} distribution that differs? We first generalize common notions of static, statistical group-level fairness to a family of premetric functions that measure “induced disparity.” We then quantify domain adaptation by bounding group-specific statistical divergences between the source and realized distributions. Next, we identify simplifying assumptions under which bounds on domain adaptation imply bounds on changes to induced disparity. We provide worked examples for two commonly used fairness definitions (demographic parity and equalized odds) and two models of domain adaptation (covariate shift and label shift), which prove to be special cases of our general method. Finally, we validate our theoretical results with synthetic data.
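The core question above can be made concrete with a minimal synthetic sketch (not the paper's actual experiments): a fixed threshold classifier satisfies demographic parity on a source distribution where both groups share the same feature distribution, but a group-dependent covariate shift opens a disparity gap. All names and parameters here (the shift magnitude, the threshold) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

n = 100_000
a = rng.integers(0, 2, size=n)                        # binary group membership

# Source: both groups draw features from the same N(0, 1).
x_src = rng.normal(loc=0.0, scale=1.0, size=n)
# Realized: covariate shift moves group 1's feature mean by +0.3 (assumed magnitude).
x_real = rng.normal(loc=0.3 * a, scale=1.0, size=n)

def predict(x):
    """Fixed predictor, 'fair' on the source distribution by symmetry."""
    return (x > 0.0).astype(float)

gap_src = demographic_parity_gap(predict(x_src), a)    # near zero
gap_real = demographic_parity_gap(predict(x_real), a)  # positive gap induced by shift
print(gap_src, gap_real)
```

Under this shift, group 1's positive-prediction rate rises toward Φ(0.3) ≈ 0.62 while group 0's stays near 0.5, so the induced disparity grows with the (bounded) divergence between source and realized distributions, which is the kind of relationship the paper's bounds formalize.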
Author Information
Reilly Raab (UC Santa Cruz)
My current research involves the dynamics of multiagent systems and the alignment of local incentives with global objectives. My background is in physics, with experience in scientific computing, signal processing, and electronics. I spent a few years between undergrad and grad school backpacking abroad, remotely developing software related to automated circuit design.
Yatong Chen (UC Santa Cruz)
Yang Liu (UC Santa Cruz)
More from the Same Authors
- 2021 Spotlight: Unintended Selection: Persistent Qualification Rate Disparities and Interventions
  Reilly Raab · Yang Liu
- 2021: Unfairness Despite Awareness: Group-Fair Classification with Strategic Agents
  Andrew Estornell · Sanmay Das · Yang Liu · Yevgeniy Vorobeychik
- 2022: Tier Balancing: Towards Dynamic Fairness over Underlying Causal Factors
  Zeyu Tang · Yatong Chen · Yang Liu · Kun Zhang
- 2022: Fishy: Layerwise Fisher Approximation for Higher-order Neural Network Optimization
  Abel Peirson · Ehsan Amid · Yatong Chen · Vladimir Feinberg · Manfred Warmuth · Rohan Anil
- 2022: Fast Implicit Constrained Optimization of Non-decomposable Objectives for Deep Networks
  Yatong Chen · Abhishek Kumar · Yang Liu · Ehsan Amid
- 2022 Spotlight: Certifying Some Distributional Fairness with Subpopulation Decomposition
  Mintong Kang · Linyi Li · Maurice Weber · Yang Liu · Ce Zhang · Bo Li
- 2022 Poster: Fairness Transferability Subject to Bounded Distribution Shift
  Yatong Chen · Reilly Raab · Jialu Wang · Yang Liu
- 2022 Poster: Certifying Some Distributional Fairness with Subpopulation Decomposition
  Mintong Kang · Linyi Li · Maurice Weber · Yang Liu · Ce Zhang · Bo Li
- 2022 Poster: Adaptive Data Debiasing through Bounded Exploration
  Yifan Yang · Yang Liu · Parinaz Naghizadeh
- 2021: Revisiting Dynamics in Strategic ML
  Yang Liu
- 2021 Poster: Unintended Selection: Persistent Qualification Rate Disparities and Interventions
  Reilly Raab · Yang Liu
- 2021 Poster: Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial
  Yang Liu · Jialu Wang
- 2021 Poster: Policy Learning Using Weak Supervision
  Jingkang Wang · Hongyi Guo · Zhaowei Zhu · Yang Liu
- 2021 Poster: Bandit Learning with Delayed Impact of Actions
  Wei Tang · Chien-Ju Ho · Yang Liu
- 2020: Contributed Talk 4: Strategic Recourse in Linear Classification
  Yatong Chen · Yang Liu
- 2020 Poster: Learning Strategy-Aware Linear Classifiers
  Yiling Chen · Yang Liu · Chara Podimata
- 2020 Poster: How do fair decisions fare in long-term qualification?
  Xueru Zhang · Ruibo Tu · Yang Liu · Mingyan Liu · Hedvig Kjellstrom · Kun Zhang · Cheng Zhang
- 2020 Poster: Optimal Query Complexity of Secure Stochastic Convex Optimization
  Wei Tang · Chien-Ju Ho · Yang Liu