

Poster in Workshop: Algorithmic Fairness through the Lens of Time

Reevaluating COMPAS: Base Rate Tracking and Racial Bias

Victor Crespo · Javier Rando · Benjamin Eva · Vijay Keswani · Walter Sinnott-Armstrong

Poster presentation: Algorithmic Fairness through the Lens of Time
Fri 15 Dec 7 a.m. PST — 3:30 p.m. PST

Abstract:

COMPAS is a controversial Recidivism Assessment Instrument (RAI) that has been used in the US criminal justice system to predict recidivism in pretrial settings. Angwin et al. (2016) argued that COMPAS is biased against Black defendants because it violates a fairness criterion known as equalized odds. However, COMPAS satisfies two other prominent fairness criteria, weak calibration and predictive parity, which are known to be inconsistent with equalized odds in most realistic settings. Eva (2022) argues that weak calibration is not sufficient for algorithmic fairness and claims that a different criterion, base rate tracking, is at least a necessary condition. In this paper, we present four natural ways of measuring how badly COMPAS violates base rate tracking, i.e., how much the average predicted risk scores across ethnic groups deviate from those groups' actual recidivism prevalence. We find significant deviations in all cases and argue that advocates of base rate tracking do indeed have good reason to be concerned about racial bias in COMPAS. Our interdisciplinary work concludes by raising some further normative questions that remain unanswered by our analysis.
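To make the base rate tracking criterion concrete, the sketch below computes one natural deviation measure: for each group, the gap between the mean predicted risk and the group's observed recidivism prevalence. This is an illustration only, not the paper's exact method (the paper presents four measures); the column names follow the public ProPublica COMPAS release, and the rescaling of decile scores to a probability is an assumption.

```python
# Illustrative sketch, not the paper's method: compares each group's mean
# predicted risk against its observed recidivism base rate.
# Column names ("race", "decile_score", "two_year_recid") follow the public
# ProPublica COMPAS data; the decile-to-probability rescaling is an assumption.
import pandas as pd

def base_rate_tracking_gap(df: pd.DataFrame,
                           group_col: str = "race",
                           score_col: str = "decile_score",
                           label_col: str = "two_year_recid") -> pd.DataFrame:
    """For each group, report mean predicted risk, observed base rate,
    and their difference (one possible base-rate-tracking deviation)."""
    rows = []
    for group, sub in df.groupby(group_col):
        mean_risk = (sub[score_col] / 10.0).mean()  # decile scores 1-10 mapped to [0.1, 1.0]
        base_rate = sub[label_col].mean()           # observed recidivism prevalence
        rows.append({group_col: group,
                     "mean_predicted_risk": mean_risk,
                     "base_rate": base_rate,
                     "deviation": mean_risk - base_rate})
    return pd.DataFrame(rows)

# Example usage (assuming the ProPublica compas-scores-two-years.csv file):
# df = pd.read_csv("compas-scores-two-years.csv")
# print(base_rate_tracking_gap(df))
```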
