A major concern with the use of machine learning (ML) models for high-stakes decision-making (e.g. criminal sentencing or commercial lending) is that these models sometimes discriminate against certain demographic groups (e.g. race, gender, age). Fair learning algorithms have been developed to address this issue, but these algorithms can still leak sensitive information (e.g. race, gender, age). Differential privacy (DP) guarantees that sensitive data cannot be leaked. Existing algorithms for DP fair learning are impractical for training large-scale models since they either: a) require computations on the full data set in each iteration of training; or b) are not guaranteed to converge. In this paper, we provide the first efficient differentially private algorithm for fair learning that is guaranteed to converge, even when minibatches of data are used (i.e. stochastic optimization). Our framework is flexible enough to permit different fairness notions (e.g. demographic parity, equalized odds) and non-binary classification with multiple (non-binary) sensitive attributes. Along the way, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly concave min-max problems. Extensive numerical experiments show that our algorithm consistently offers significant performance gains vs. state-of-the-art DP fair baselines. Moreover, our algorithm is amenable to large-scale ML with non-binary targets and non-binary sensitive attributes.
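The abstract describes solving a fair-learning problem cast as a nonconvex-strongly concave min-max objective with noisy minibatch (stochastic) updates. The sketch below is a minimal, hypothetical illustration of that general recipe, not the paper's actual algorithm: one noisy stochastic gradient descent-ascent step with per-example clipping and Gaussian noise (the standard source of a DP guarantee). The function name `dp_sgda_step`, the demographic-parity-style surrogate `gap`, and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact method): noisy minibatch
# stochastic gradient descent-ascent for a min-max objective of the form
#     min_theta max_w  loss(theta; batch) + w * gap(theta; batch) - (mu/2) * w^2,
# where gap() is a hypothetical demographic-parity surrogate and the
# -(mu/2) * w^2 term makes the objective strongly concave in the dual w.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_sgda_step(theta, w, X, y, s, clip=1.0, noise_mult=1.0,
                 lr_theta=0.1, lr_w=0.1, mu=1.0):
    """One noisy minibatch descent-ascent step (illustrative only).

    X: (b, d) features, y: (b,) binary labels, s: (b,) binary sensitive attribute.
    clip / noise_mult set the per-example clipping norm and the Gaussian noise
    added to the primal gradient, which is where the DP guarantee comes from.
    """
    b, d = X.shape
    p = sigmoid(X @ theta)

    # Per-example primal gradients: logistic loss plus w times a simple
    # group-gap surrogate (difference in predicted scores between groups).
    sign = np.where(s == 1, 1.0, -1.0)                  # (b,)
    grad_loss = (p - y)[:, None] * X                    # (b, d)
    grad_gap = (sign * p * (1.0 - p))[:, None] * X      # (b, d)
    per_example = grad_loss + w * grad_gap              # (b, d)

    # DP step: clip each example's gradient, then add Gaussian noise.
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noise = noise_mult * clip * rng.standard_normal(d)
    g_theta = clipped.mean(axis=0) + noise / b

    # Dual ascent step on the scalar w (strongly concave thanks to -mu/2 * w^2).
    gap = float(sign @ p) / b
    g_w = gap - mu * w

    theta = theta - lr_theta * g_theta
    w = w + lr_w * g_w
    return theta, w

# Tiny usage example on synthetic data.
d = 5
theta, w = np.zeros(d), 0.0
X = rng.standard_normal((32, d))
s = (rng.random(32) < 0.5).astype(int)
y = (sigmoid(X @ rng.standard_normal(d)) > 0.5).astype(int)
for _ in range(100):
    theta, w = dp_sgda_step(theta, w, X, y, s)
```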
Author Information
Andrew Lowy (University of Southern California)
Devansh Gupta (Indraprastha Institute of Information Technology, Delhi)
Meisam Razaviyayn (University of Southern California)