Many instances of algorithmic bias are caused by distributional shifts. For example, machine learning (ML) models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we leverage this connection between algorithmic fairness and distribution shifts to show that algorithmic fairness interventions can help ML models overcome distribution shifts, and that domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases. In particular, we show that (i) enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models under the covariate shift assumption and that (ii) it is possible to adapt representation alignment methods for domain adaptation to enforce individual fairness. The former is unexpected because IF interventions were not developed with distribution shifts in mind. The latter is also unexpected because representation alignment is not a common approach in the individual fairness literature.
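The second result, adapting representation alignment to enforce individual fairness, can be illustrated with a minimal sketch. This is not the authors' method; the data, encoder, and penalty below are hypothetical. The idea is that comparable pairs of inputs (e.g., ones differing only in a sensitive attribute) should map to nearby representations, just as domain adaptation aligns source and target representations:

```python
import numpy as np

def alignment_penalty(enc, X_a, X_b):
    # Mean squared distance between representations of comparable pairs.
    # X_a[i] and X_b[i] are assumed comparable (here: differing only in a
    # "sensitive" feature); an individually fair representation maps each
    # pair close together, analogous to aligning domains in domain adaptation.
    Z_a, Z_b = X_a @ enc, X_b @ enc
    return float(np.mean(np.sum((Z_a - Z_b) ** 2, axis=1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X_pair = X.copy()
X_pair[:, 2] = rng.normal(size=100)   # resample the "sensitive" feature only

enc_sensitive = np.eye(3)             # linear encoder keeping all features
enc_invariant = np.eye(3)[:, :2]      # linear encoder dropping the sensitive one

# The invariant encoder aligns comparable pairs exactly; the other does not.
print(alignment_penalty(enc_invariant, X, X_pair))       # 0.0
print(alignment_penalty(enc_sensitive, X, X_pair) > 0)   # True
```

In practice the penalty would be added to the task loss and minimized jointly with a learned (e.g., neural) encoder, so that the model trades off accuracy against treating comparable individuals similarly.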
Author Information
Debarghya Mukherjee (University of Michigan)
Felix Petersen (Stanford University)
Mikhail Yurochkin (IBM Research, MIT-IBM Watson AI Lab)
I am a Research Staff Member at IBM Research and the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. My research interests are:
- Algorithmic Fairness
- Out-of-Distribution Generalization
- Applications of Optimal Transport in Machine Learning
- Model Fusion and Federated Learning
Before joining IBM, I completed my PhD in Statistics at the University of Michigan, where I worked with Long Nguyen. I received my Bachelor's degree in applied mathematics and physics from the Moscow Institute of Physics and Technology.
Yuekai Sun (University of Michigan)
More from the Same Authors
- 2021: Measuring the sensitivity of Gaussian processes to kernel choice
  Will Stephenson · Soumya Ghosh · Tin Nguyen · Mikhail Yurochkin · Sameer Deshpande · Tamara Broderick
- 2022 Poster: Deep Differentiable Logic Gate Networks
  Felix Petersen · Christian Borgelt · Hilde Kuehne · Oliver Deussen
- 2022 Poster: Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
  Songkai Xue · Yuekai Sun · Mikhail Yurochkin
- 2021 Poster: Does enforcing fairness mitigate biases caused by subpopulation shift?
  Subha Maity · Debarghya Mukherjee · Mikhail Yurochkin · Yuekai Sun
- 2021 Poster: Post-processing for Individual Fairness
  Felix Petersen · Debarghya Mukherjee · Yuekai Sun · Mikhail Yurochkin
- 2021 Poster: On sensitivity of meta-learning to support data
  Mayank Agarwal · Mikhail Yurochkin · Yuekai Sun
- 2020 Poster: Continuous Regularized Wasserstein Barycenters
  Lingxiao Li · Aude Genevay · Mikhail Yurochkin · Justin Solomon
- 2020 Demonstration: IBM Federated Learning Community Edition: An Interactive Demonstration
  Laura Wynter · Chaitanya Kumar · Pengqian Yu · Mikhail Yurochkin · Amogh Tarcar
- 2019 Poster: Alleviating Label Switching with Optimal Transport
  Pierre Monteiller · Sebastian Claici · Edward Chien · Farzaneh Mirzazadeh · Justin Solomon · Mikhail Yurochkin
- 2019 Poster: Hierarchical Optimal Transport for Document Representation
  Mikhail Yurochkin · Sebastian Claici · Edward Chien · Farzaneh Mirzazadeh · Justin Solomon
- 2019 Poster: Scalable inference of topic evolution via models for latent geometric structures
  Mikhail Yurochkin · Zhiwei Fan · Aritra Guha · Paraschos Koutris · XuanLong Nguyen
- 2019 Poster: Statistical Model Aggregation via Parameter Matching
  Mikhail Yurochkin · Mayank Agarwal · Soumya Ghosh · Kristjan Greenewald · Nghia Hoang
- 2017 Poster: Conic Scan-and-Cover algorithms for nonparametric topic modeling
  Mikhail Yurochkin · Aritra Guha · XuanLong Nguyen
- 2017 Poster: Multi-way Interacting Regression via Factorization Machines
  Mikhail Yurochkin · XuanLong Nguyen · Nikolaos Vasiloglou
- 2016 Poster: Geometric Dirichlet Means Algorithm for topic inference
  Mikhail Yurochkin · XuanLong Nguyen