
Fair Wrapping for Black-box Predictions
Alexander Soen · Ibrahim Alabdulmohsin · Sanmi Koyejo · Yishay Mansour · Nyalleng Moorosi · Richard Nock · Ke Sun · Lexing Xie

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #542
We introduce a new family of techniques to post-process ("wrap") a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions whose optimization can correct any twist in prediction, unfairness being treated as a twist. In the post-processing, we learn a wrapper function, which we define as an $\alpha$-tree, that modifies the prediction. We provide two generic boosting algorithms to learn $\alpha$-trees. We show that our modification has appealing properties in terms of composition of $\alpha$-trees, generalization, interpretability, and KL divergence between modified and original predictions. We exemplify the use of our technique in three fairness notions: conditional value-at-risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets.
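The wrapping idea can be illustrated with a minimal sketch: a frozen black-box scorer is left untouched, and a learned per-region correction is applied to its output probabilities. The sketch below is an assumption-laden analogue, not the paper's $\alpha$-tree algorithm: the black box, the single split on the sensitive attribute, the power-tilt correction, and the grid search for the per-leaf exponent are all illustrative choices, targeting statistical parity only.

```python
import numpy as np

# Hedged illustration (NOT the paper's algorithm): post-process a frozen
# black-box score with a per-group correction to shrink the statistical-
# parity gap. The "tree" here is a single split on the sensitive attribute.

def black_box(x):
    # Stand-in frozen classifier: probability of the positive class.
    return 1.0 / (1.0 + np.exp(-(x[:, 0] + x[:, 1])))

def tilt(p, alpha):
    # Power-tilt a probability p; alpha = 1 leaves p unchanged. This is a
    # simple assumed analogue of a per-leaf correction, not the paper's.
    return p**alpha / (p**alpha + (1.0 - p)**alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
group = (X[:, 1] > 0).astype(int)   # sensitive attribute defines the split
p = black_box(X)

# Choose each leaf's alpha so its mean positive rate matches the overall
# rate (a crude 1-D grid search standing in for learning the tree).
target = p.mean()
grid = np.linspace(0.2, 5.0, 200)
alphas = {}
for g in (0, 1):
    rates = np.array([tilt(p[group == g], a).mean() for a in grid])
    alphas[g] = grid[np.argmin(np.abs(rates - target))]

p_wrapped = np.where(group == 1, tilt(p, alphas[1]), tilt(p, alphas[0]))

gap_before = abs(p[group == 0].mean() - p[group == 1].mean())
gap_after = abs(p_wrapped[group == 0].mean() - p_wrapped[group == 1].mean())
```

Because the black box is never retrained, only its outputs are reshaped, the wrapped model stays cheap to deploy; the paper's contribution is learning such corrections as boosted $\alpha$-trees with guarantees the ad-hoc search above lacks.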

Author Information

Alexander Soen (Australian National University)
Ibrahim Alabdulmohsin (Google)
Sanmi Koyejo (Stanford, Google Research)

Sanmi Koyejo is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign and a research scientist at Google AI in Accra. Koyejo's research interests are in developing the principles and practice of adaptive and robust machine learning. Additionally, Koyejo focuses on applications to biomedical imaging and neuroscience. Koyejo co-founded the Black in AI organization and currently serves on its board.

Yishay Mansour (Tel Aviv University & Google)
Nyalleng Moorosi (Google Ghana)
Richard Nock (Data61, the Australian National University and the University of Sydney)
Ke Sun (CSIRO's Data61 and Australian National University)
Lexing Xie (Australian National University)
