Abstract
Risk assessment instrument (RAI) datasets, particularly ProPublica’s COMPAS dataset, are commonly used in algorithmic fairness papers due to benchmarking practices of comparing algorithms on datasets used in prior work. In many cases, this data is used as a benchmark to demonstrate good performance without accounting for the complexities of criminal justice (CJ) processes. However, we show that pretrial RAI datasets can contain numerous measurement biases and errors, and due to disparities in discretion and deployment, algorithmic fairness applied to RAI datasets is limited in making claims about real-world outcomes. These reasons make the datasets a poor fit for benchmarking under assumptions of ground truth and real-world impact. Furthermore, conventional practices of simply replicating previous data experiments may implicitly inherit or edify normative positions without explicitly interrogating value-laden assumptions. Without context of how interdisciplinary fields have engaged in CJ research and context of how RAIs operate upstream and downstream, algorithmic fairness practices are misaligned for meaningful contribution in the context of CJ, and would benefit from transparent engagement with normative considerations and values related to fairness, justice, and equality. These factors prompt questions about whether benchmarks for intrinsically socio-technical systems like the CJ system can exist in a beneficial and ethical way.
Author Information
Michelle Bao (Stanford University)
Angela Zhou (Cornell University)
Samantha Zottola (Policy Research Associates)
Brian Brubach (Wellesley College)
Sarah Desmarais (Policy Research Associates, Inc.)
Aaron Horowitz (American Civil Liberties Union)
Kristian Lum (Twitter)
Suresh Venkatasubramanian (University of Utah)
More from the Same Authors
- 2021: Stateful Offline Contextual Policy Evaluation and Learning
  Angela Zhou
- 2022: Estimating Fairness in the Absence of Ground-Truth Labels
  Michelle Bao · Jessica Dai · Keegan Hines · John Dickerson
- 2022: Panel
  Kristian Lum · Rachel Cummings · Jake Goldenfein · Sara Hooker · Joshua Loftus
- 2021 Workshop: Machine Learning Meets Econometrics (MLECON)
  David Bruns-Smith · Arthur Gretton · Limor Gultchin · Niki Kilbertus · Krikamol Muandet · Evan Munro · Angela Zhou
- 2021 Poster: Fair Clustering Under a Bounded Cost
  Seyed Esmaeili · Brian Brubach · Aravind Srinivasan · John Dickerson
- 2021 Poster: Improved Guarantees for Offline Stochastic Matching via new Ordered Contention Resolution Schemes
  Brian Brubach · Nathaniel Grammel · Will Ma · Aravind Srinivasan
- 2021: It's COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks
  Michelle Bao · Angela Zhou · Samantha Zottola · Brian Brubach · Sarah Desmarais · Aaron Horowitz · Kristian Lum · Suresh Venkatasubramanian
- 2020 Workshop: Consequential Decisions in Dynamic Environments
  Niki Kilbertus · Angela Zhou · Ashia Wilson · John Miller · Lily Hu · Lydia T. Liu · Nathan Kallus · Shira Mitchell
- 2020: Spotlight Talk 4: Fairness, Welfare, and Equity in Personalized Pricing
  Nathan Kallus · Angela Zhou
- 2020 Poster: Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning
  Nathan Kallus · Angela Zhou
- 2019: Coffee Break and Poster Session
  Rameswar Panda · Prasanna Sattigeri · Kush Varshney · Karthikeyan Natesan Ramamurthy · Harvineet Singh · Vishwali Mhasawade · Shalmali Joshi · Laleh Seyyed-Kalantari · Matthew McDermott · Gal Yona · James Atwood · Hansa Srinivasan · Yonatan Halpern · D. Sculley · Behrouz Babaki · Margarida Carvalho · Josie Williams · Narges Razavian · Haoran Zhang · Amy Lu · Irene Y Chen · Xiaojie Mao · Angela Zhou · Nathan Kallus
- 2019: Opening Remarks
  Thorsten Joachims · Nathan Kallus · Michele Santacatterina · Adith Swaminathan · David Sontag · Angela Zhou
- 2019 Workshop: “Do the right thing”: machine learning and causal inference for improved decision making
  Michele Santacatterina · Thorsten Joachims · Nathan Kallus · Adith Swaminathan · David Sontag · Angela Zhou
- 2019 Poster: The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric
  Nathan Kallus · Angela Zhou
- 2019 Poster: Disentangling Influence: Using disentangled representations to audit model predictions
  Charles Marx · Richard Lanas Phillips · Sorelle Friedler · Carlos Scheidegger · Suresh Venkatasubramanian
- 2019 Poster: Assessing Disparate Impact of Personalized Interventions: Identifiability and Bounds
  Nathan Kallus · Angela Zhou
- 2018 Poster: Confounding-Robust Policy Improvement
  Nathan Kallus · Angela Zhou