Recent interest in causality for fair decision-making systems has been accompanied by great skepticism due to practical and epistemological challenges in applying existing causal fairness approaches. Existing works mainly seek to remove the causal effect of social categories such as race or gender along problematic pathways of an underlying DAG model. However, in practice, DAG models are often unknown. Further, a single entity may not be held responsible for the discrimination along an entire causal pathway. Building on the “potential outcomes framework,” this paper aims to lay out the necessary conditions for proper application of causal fairness. To this end, we propose a shift from postulating interventions on immutable social categories to interventions on their perceptions, and highlight two key aspects of interventions that are largely overlooked in the causal fairness literature: the timing and nature of manipulations. We argue that such conceptualization is key to evaluating the validity of causal assumptions and conducting sound causal analysis, including avoiding post-treatment bias. Additionally, choosing the timing of the intervention properly allows us to conduct fairness analyses at different points in a decision-making process. Our framework also addresses the limitations of fairness metrics that depend on statistical correlations. Specifically, we introduce causal variants of common statistical fairness notions and make a novel observation that, under the causal framework, there is no fundamental disagreement between different criteria. Finally, we conduct extensive experiments on synthetic and real-world datasets, including a case study on police stop-and-search decisions, and demonstrate the efficacy of our framework in evaluating and mitigating unfairness at various decision points.
Author Information
Aida Rahmattalabi (University of Southern California)
Alice Xiang (Sony AI)
More from the Same Authors
- 2022 Expo Talk Panel: Challenges & Opportunities for Ethical AI in Practice »
  Alice Xiang
- 2021 : Q/A Session »
  Alice Xiang · Jacob Andreas
- 2021 : Speaker Introduction »
  Alice Xiang
- 2021 Workshop: eXplainable AI approaches for debugging and diagnosis »
  Roberto Capobianco · Biagio La Rosa · Leilani Gilpin · Wen Sun · Alice Xiang · Alexander Feldman
- 2019 Poster: Exploring Algorithmic Fairness in Robust Graph Covering Problems »
  Aida Rahmattalabi · Phebe Vayanos · Anthony Fulginiti · Eric Rice · Bryan Wilder · Amulya Yadav · Milind Tambe