

Poster

The Fragility of Fairness: Causal Sensitivity Analysis for Fair Machine Learning

Jake Fawkes · Nic Fishman · Mel Andrews · Zachary Lipton

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Fairness metrics are a core tool in the fair machine learning literature (FairML), used to determine whether ML models are, in some sense, “fair.” Real-world data, however, are typically plagued by a variety of measurement biases and other violations of assumptions that can render fairness assessments meaningless. We adapt tools from causal sensitivity analysis to the FairML context, providing a general framework which (1) accommodates effectively any combination of fairness metric and bias that can be posed in the “oblivious setting”; (2) allows researchers to investigate combinations of biases, resulting in non-linear sensitivity; and (3) enables flexible encoding of domain-specific constraints and assumptions. Employing this framework, we analyze the sensitivity of the most common parity metrics under three varieties of classifier across 12 canonical fairness datasets. Our analysis reveals the striking fragility of fairness assessments to even minor dataset biases. We show that causal sensitivity analysis provides a powerful and necessary toolkit for gauging the informativeness of parity metric evaluations.
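To make the underlying idea concrete, the sketch below (which is illustrative only and not the authors' framework) shows how a measured parity metric can shift under an assumed, bounded dataset bias. It computes a demographic-parity gap on observed data and then a crude sensitivity range under a hypothetical bias model in which up to a fraction `eps` of the recorded sensitive attributes may be misrecorded; the function names and the bias model are assumptions for illustration.

```python
# Illustrative sketch only -- not the paper's method. It conveys the general
# idea of sensitivity analysis for a parity metric: how far an observed
# demographic-parity gap can move when the data are assumed to carry a
# bounded measurement bias (here, a hypothetical mislabeling of the
# sensitive attribute at rate `eps`).
import numpy as np


def demographic_parity_gap(y_hat: np.ndarray, a: np.ndarray) -> float:
    """|P(Yhat=1 | A=1) - P(Yhat=1 | A=0)| on the observed data."""
    return abs(y_hat[a == 1].mean() - y_hat[a == 0].mean())


def dp_gap_sensitivity(y_hat: np.ndarray, a: np.ndarray, eps: float,
                       n_draws: int = 2000, seed: int = 0) -> tuple[float, float]:
    """Crude range of the DP gap when up to a fraction `eps` of the recorded
    sensitive attributes may be wrong. Uses random search over corrupted
    attribute vectors; a real analysis would optimize the bound directly."""
    rng = np.random.default_rng(seed)
    n = len(a)
    k = int(eps * n)
    gaps = []
    for _ in range(n_draws):
        a_corrupt = a.copy()
        idx = rng.choice(n, size=k, replace=False)
        a_corrupt[idx] = 1 - a_corrupt[idx]  # flip up to an eps-fraction of labels
        gaps.append(demographic_parity_gap(y_hat, a_corrupt))
    return min(gaps), max(gaps)


if __name__ == "__main__":
    # Synthetic data: predictions mildly correlated with the observed group.
    rng = np.random.default_rng(1)
    a = rng.integers(0, 2, size=5000)
    y_hat = (rng.random(5000) < np.where(a == 1, 0.55, 0.50)).astype(int)

    print("observed DP gap:", demographic_parity_gap(y_hat, a))
    print("range under 5% attribute noise:", dp_gap_sensitivity(y_hat, a, eps=0.05))
```

Even this toy bias model illustrates the abstract's point: a gap that looks small on the observed data can span a much wider interval once plausible measurement error is accounted for, so the headline fairness number alone may be uninformative.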
