Characterizing the risk of fairwashing
Ulrich Aïvodji · Hiromi Arai · Sébastien Gambs · Satoshi Hara

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Fairwashing refers to the risk that an unfair black-box model can be explained by a fairer model through post-hoc explanation manipulation. In this paper, we investigate the capability of fairwashing attacks by analyzing their fidelity-unfairness trade-offs. In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model. We also demonstrate that fairwashing attacks can transfer across black-box models, meaning that other black-box models can perform fairwashing without explicitly using their predictions. This generalization and transferability of fairwashing attacks imply that their detection will be difficult in practice. Finally, we propose an approach to quantify the risk of fairwashing, which is based on the computation of the range of the unfairness of high-fidelity explainers.
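The proposed risk-quantification idea, computing the range of unfairness attainable by high-fidelity explainers of a black box, can be sketched as follows. This is a rough illustration under assumed choices (synthetic data, a bootstrap-sampled family of shallow decision trees as candidate explainers, a 0.95 fidelity threshold, and demographic-parity gap as the unfairness metric), not the authors' exact procedure.

```python
# Sketch: estimate the unfairness range over high-fidelity explainers
# of a black-box model. All modeling choices here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
x0 = rng.normal(size=n)
noise = rng.normal(size=(n, 3))
s = (rng.random(n) < 0.5).astype(int)        # binary sensitive attribute
y = (x0 + s > 0.5).astype(int)               # label depends on s: unfair task
F = np.column_stack([x0, noise, s])          # s is visible to the models

# Black-box model whose decisions are to be explained.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(F, y)
bb_pred = black_box.predict(F)               # labels on the "suing group"

def dp_gap(pred, s):
    """Demographic-parity gap: |P(pred=1 | s=1) - P(pred=1 | s=0)|."""
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

# Sample a family of interpretable explainers; keep the high-fidelity ones
# (fidelity = agreement with the black-box predictions) and record the
# unfairness of each survivor.
unfairness = []
for seed in range(100):
    idx = rng.integers(0, n, n)              # bootstrap resample for diversity
    expl = DecisionTreeClassifier(max_depth=3, random_state=seed)
    expl.fit(F[idx], bb_pred[idx])
    fidelity = (expl.predict(F) == bb_pred).mean()
    if fidelity >= 0.95:                     # illustrative fidelity threshold
        unfairness.append(dp_gap(expl.predict(F), s))

print(f"black-box unfairness: {dp_gap(bb_pred, s):.3f}")
print(f"unfairness range over {len(unfairness)} high-fidelity explainers: "
      f"[{min(unfairness):.3f}, {max(unfairness):.3f}]")
```

A wide gap between the lower end of this range and the black box's own unfairness indicates room for fairwashing: an adversary can pick a high-fidelity explainer that looks much fairer than the model it rationalizes.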

Author Information

Ulrich Aïvodji (ÉTS)
Hiromi Arai (RIKEN)
Sébastien Gambs (Université du Québec à Montréal)
Satoshi Hara (Osaka University)