Maintaining fairness across distribution shifts: do we have viable solutions for real-world applications?
Jessica Schrouff · Natalie Harris · Sanmi Koyejo · Ibrahim Alabdulmohsin · Eva Schnider · Diana Mincu · Christina Chen · Awa Dieng · Yuan Liu · Vivek Natarajan · Katherine Heller · Alexander D'Amour
Event URL: https://eventhosts.gather.town/app/kR7ip0Bhhn8BXuMD/wiml-workshop-2021

Fairness and robustness are often treated as orthogonal dimensions along which machine learning models are evaluated. Recent evidence, however, shows that fairness guarantees do not transfer across environments. In healthcare settings, this can mean, for example, that a model that performs fairly (according to a selected metric) in hospital A exhibits unfairness when deployed in hospital B. Here we illustrate how fairness metrics may change under distribution shift using two real-world applications in Electronic Health Records (EHR) and Dermatology. Through a causal analysis, we further show that clinically plausible shifts simultaneously affect multiple parts of the data-generating process. Such complex shifts invalidate most assumptions required by current mitigation techniques, which typically target either covariate or label shift. Our work thus exposes a technical gap in a realistic problem setting, and we hope it elicits further research at the intersection of fairness and robustness in real-world applications.
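As a toy illustration of the phenomenon the abstract describes (this is not code or data from the paper; the "hospital" populations and predictions below are invented), the sketch computes an equal-opportunity gap, the absolute difference in true-positive rates between two groups, for one fixed model evaluated on two different populations. The same model can look fair on one population and unfair on another:

```python
def tpr(y_pred, group, g):
    """True-positive rate for group g, over cases whose true label is 1."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def equal_opportunity_gap(y_pred, group):
    """Absolute TPR difference between group 0 and group 1."""
    return abs(tpr(y_pred, group, 0) - tpr(y_pred, group, 1))

# Eight true-positive cases, four per demographic group.
group = [0, 0, 0, 0, 1, 1, 1, 1]

# Hospital A: the model recovers 3/4 of group 0 and 2/4 of group 1.
pred_a = [1, 1, 1, 0, 1, 1, 0, 0]
gap_a = equal_opportunity_gap(pred_a, group)  # 0.25

# Hospital B: the population has shifted; the unchanged model now
# recovers 4/4 of group 0 but only 1/4 of group 1.
pred_b = [1, 1, 1, 1, 1, 0, 0, 0]
gap_b = equal_opportunity_gap(pred_b, group)  # 0.75

print(gap_a, gap_b)
```

The metric roughly triples between the two sites even though the model is unchanged; a real analysis would additionally have to determine, as the paper does via causal reasoning, which parts of the data-generating process shifted.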

Author Information

Jessica Schrouff (Google Research)
Natalie Harris (Google)
Sanmi Koyejo (UIUC)
Ibrahim Alabdulmohsin (Google Research)
Eva Schnider (University of Basel)
Diana Mincu (Google)
Christina Chen (Google)
Awa Dieng (Google)
Yuan Liu (Google Inc)
Vivek Natarajan (Google Brain)

Researcher working at the intersection of AI and healthcare at Google. Research interests include improving data efficiency, robustness, generalization, safety, fairness and privacy of AI systems.

Katherine Heller (Google)
Alexander D'Amour (Google Brain)