Despite the success of large-scale empirical risk minimization (ERM) at achieving high accuracy across a variety of machine learning tasks, fair ERM is hindered by the incompatibility of fairness constraints with stochastic optimization. We consider the problem of fair classification with discrete sensitive attributes and potentially large models and data sets, requiring stochastic solvers. Existing in-processing fairness algorithms are either impractical in the large-scale setting, because they require large batches of data at each iteration, or they are not guaranteed to converge. In this paper, we develop the first stochastic in-processing fairness algorithm with guaranteed convergence. For the demographic parity, equalized odds, and equal opportunity notions of fairness, we provide slight variations of our algorithm, called FERMI, and prove that each of these variations converges in stochastic optimization with any batch size. Empirically, we show that FERMI is amenable to stochastic solvers with multiple (non-binary) sensitive attributes and non-binary targets, performing well even with a minibatch size as small as one. Extensive experiments show that FERMI achieves the most favorable tradeoffs between fairness violation and test accuracy across all tested setups compared with state-of-the-art baselines for demographic parity, equalized odds, and equal opportunity. These benefits are especially significant with small batch sizes and for non-binary classification with a large number of sensitive attributes, making FERMI a practical, scalable fairness algorithm.
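As a concrete illustration of one of the fairness notions the abstract refers to, the sketch below computes a demographic parity violation for a set of binary predictions with a (possibly non-binary) sensitive attribute. This is a minimal metric sketch, not the FERMI algorithm or objective itself; the function name and the max-gap formulation are illustrative assumptions, since violation can be measured in several ways.

```python
import numpy as np

def demographic_parity_violation(y_pred, sensitive):
    """Largest gap between any sensitive group's positive-prediction
    rate and the overall positive-prediction rate (0 = perfect parity)."""
    overall = y_pred.mean()
    gaps = [abs(y_pred[sensitive == s].mean() - overall)
            for s in np.unique(sensitive)]
    return max(gaps)

# Toy example: two sensitive groups of four samples each.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_violation(y_pred, groups))  # 0.25
```

In-processing methods such as the one described above penalize (or constrain) a quantity of this kind during training rather than measuring it only after the fact; the difficulty the abstract highlights is that such group-rate statistics do not decompose over minibatches, which is what makes convergent stochastic optimization nontrivial.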
Author Information
Andrew Lowy (USC)
Sina Baharlouei (University of Southern California)
Rakesh Pavan (National Institute of Technology Karnataka)
Meisam Razaviyayn (University of Southern California)
Ahmad Beirami (Google Research)
Ahmad Beirami is a research scientist at Facebook AI, leading research to power the next generation of virtual digital assistants with AR/VR capabilities. His research broadly involves learning models with robustness and fairness considerations in large-scale systems. Prior to that, he led the AI agent research program for automated playtesting of video games at Electronic Arts. Before moving to industry in 2018, he held a joint postdoctoral fellow position at Harvard & MIT, focused on problems at the intersection of core machine learning and information theory. He is the recipient of the 2015 Sigma Xi Best PhD Thesis Award from Georgia Tech, for his work on the fundamental limits of efficient communication over IoT networks.
More from the Same Authors
- 2021: Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses
  Andrew Lowy · Meisam Razaviyayn
- 2021: FeO2: Federated Learning with Opt-Out Differential Privacy
  Nasser Aldaghri · Hessam Mahdavifar · Ahmad Beirami
- 2022: Policy gradient finds global optimum of nearly linear-quadratic control systems
  Yinbin Han · Meisam Razaviyayn · Renyuan Xu
- 2022: Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses & Extension to Non-Convex Losses
  Andrew Lowy · Meisam Razaviyayn
- 2022: Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes
  Sina Baharlouei · Fatemeh Sheikholeslami · Meisam Razaviyayn · J. Zico Kolter
- 2023 Poster: Uncovering the Hidden Dynamics of Video Self-supervised Learning under Distribution Shifts
  Pritam Sarkar · Ahmad Beirami · Ali Etemad
- 2023 Poster: SpecTr: Fast Speculative Decoding via Optimal Transport
  Ziteng Sun · Ananda Theertha Suresh · Jae Hun Ro · Ahmad Beirami · Himanshu Jain · Felix Yu
- 2023 Workshop: Workshop on robustness of zero/few-shot learning in foundation models (R0-FoMo)
  Ananth Balashankar · Saurabh Garg · Jindong Gu · Amrith Setlur · Yao Qin · Aditi Raghunathan · Ahmad Beirami
- 2022: Stochastic Differentially Private and Fair Learning
  Andrew Lowy · Devansh Gupta · Meisam Razaviyayn
- 2022: Poster Session 1
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2020 Poster: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
  Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2020 Spotlight: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
  Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2019 Poster: Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods
  Maher Nouiehed · Maziar Sanjabi · Tianjian Huang · Jason Lee · Meisam Razaviyayn
- 2018: Spotlights & Poster Session
  James A Preiss · Alexander Grishin · Ville Kyrki · Pol Moreno Comellas · Akshay Narayan · Tze-Yun Leong · Yongxi Tan · Lilian Weng · Toshiharu Sugawara · Kenny Young · Tianmin Shu · Jonas Gehring · Ahmad Beirami · Chris Amato · sammie katt · Andrea Baisero · Arseny Kuznetsov · Jan Humplik · Vladimír Petrík
- 2018 Poster: On the Convergence and Robustness of Training GANs with Regularized Optimal Transport
  Maziar Sanjabi · Jimmy Ba · Meisam Razaviyayn · Jason Lee
- 2017 Poster: On Optimal Generalizability in Parametric Learning
  Ahmad Beirami · Meisam Razaviyayn · Shahin Shahrampour · Vahid Tarokh