
Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
Bang An · Zora Che · Mucong Ding · Furong Huang

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #441

The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work on improving algorithmic fairness, most of it assumes identical training and test distributions. In many real-world applications, however, this assumption is frequently violated: fair models trained in one environment are often deployed in another, and their fairness has been observed to collapse. In this paper, we study how to transfer model fairness under distribution shifts, a widespread issue in practice. We conduct a fine-grained analysis of how fair models are affected under different types of distribution shifts, and find that domain shifts are more challenging than subpopulation shifts. Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness. Guided by it, we propose a practical algorithm with fair consistency regularization as the key component. We build a synthetic benchmark covering all types of distribution shifts to verify the theoretical findings experimentally. Experiments on synthetic and real datasets, including image and tabular data, demonstrate that our approach effectively transfers both fairness and accuracy under various types of distribution shifts.
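To make the key component concrete, below is a minimal sketch of a group-balanced consistency regularizer in PyTorch, assuming a FixMatch-style self-training setup (weak/strong augmentations and confidence-thresholded pseudo-labels). The function name, the per-group averaging scheme, and the threshold value are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of fair consistency regularization, assuming a
# FixMatch-style self-training setup. The grouping, weighting, and
# threshold choices here are illustrative, not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def fair_consistency_loss(logits_weak, logits_strong, group_ids, threshold=0.95):
    """Consistency loss balanced across sensitive groups.

    logits_weak:   model outputs on weakly augmented unlabeled inputs
    logits_strong: model outputs on strongly augmented views of the same inputs
    group_ids:     sensitive-group label of each unlabeled example
    """
    with torch.no_grad():
        probs = F.softmax(logits_weak, dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf.ge(threshold).float()  # keep only confident pseudo-labels

    # Per-example consistency loss: strong-view predictions vs. pseudo-labels.
    per_example = F.cross_entropy(logits_strong, pseudo, reduction="none") * mask

    # Average within each group, then across groups, so the consistency term
    # of a small group is not drowned out by a larger one.
    group_losses = []
    for g in group_ids.unique():
        sel = group_ids == g
        denom = mask[sel].sum().clamp(min=1.0)
        group_losses.append(per_example[sel].sum() / denom)
    return torch.stack(group_losses).mean()
```

In training, this term would be added to the supervised loss on labeled source data; the group-wise averaging is one simple way to encourage the consistency (and hence the transferred accuracy) to be comparable across groups.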

Author Information

Bang An (University of Maryland, College Park)
Zora Che
Mucong Ding (Department of Computer Science, University of Maryland, College Park)
Furong Huang (University of Maryland)
