Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses & Extension to Non-Convex Losses
Andrew Lowy · Meisam Razaviyayn
Event URL: https://openreview.net/forum?id=gvX2Oc_KU8o
We study differentially private (DP) stochastic optimization (SO) with data containing outliers and loss functions that are (possibly) not Lipschitz continuous. To date, the vast majority of work on DP SO assumes that the loss is uniformly Lipschitz over data (i.e., stochastic gradients are uniformly bounded over all data points). While this assumption is convenient, it is often unrealistic: in many practical problems, the loss function may not be uniformly Lipschitz. Even when the loss function is Lipschitz continuous, the worst-case Lipschitz parameter of the loss over all data points may be extremely large due to outliers. In such cases, the error bounds for DP SO, which scale with the worst-case Lipschitz parameter of the loss, are vacuous. To address these limitations, this work does not require the loss function to be uniformly Lipschitz. Instead, building on a recent line of work (Wang et al., 2020; Kamath et al., 2022), we make the weaker assumption that stochastic gradients have bounded $k$-th order moments for some $k \geq 2$. Compared with works on DP Lipschitz SO, our excess risk scales with the $k$-th moment bound instead of the Lipschitz parameter of the loss, allowing for significantly faster rates in the presence of outliers. For convex and strongly convex loss functions, we provide the first asymptotically optimal excess risk bounds (up to a logarithmic factor). In contrast to prior work, our bounds do not require the loss function to be differentiable/smooth. We also devise an accelerated algorithm for smooth losses that runs in linear time and has excess risk that is tight in certain practical parameter regimes. Additionally, our work is the first to address non-convex non-Lipschitz loss functions satisfying the Proximal-PL inequality; this covers some practical machine learning models. Our Proximal-PL algorithm has near-optimal excess risk.
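For concreteness, in our own notation (not verbatim from the paper): uniform Lipschitzness requires $\sup_{w,x} \|\nabla f(w,x)\| \leq L$, i.e., a bound holding for every single data point $x$, whereas the moment assumption only requires, for some $k \geq 2$,
$$\sup_{w} \left( \mathbb{E}_{x \sim \mathcal{D}} \|\nabla f(w,x)\|^{k} \right)^{1/k} \leq \gamma_k.$$
Since $\gamma_k \leq L$ always holds, and a single outlier can blow up $L$ while barely moving $\gamma_k$, excess risk bounds that scale with $\gamma_k$ instead of $L$ can be dramatically tighter on heavy-tailed data.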
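As a loose illustration of the standard tool in this setting (per-sample gradient clipping plus Gaussian noise), here is a minimal Python sketch. It is not the authors' algorithm, and `grad_oracle`, `clip_thresh`, and `sigma` are hypothetical names: clipping caps each step's sensitivity at the threshold regardless of outliers, at the cost of a bias that analyses like this paper's trade off against the $k$-th moment bound.

```python
import numpy as np

def clipped_noisy_sgd(grad_oracle, w0, n_steps, lr, clip_thresh, sigma):
    """Sketch of noisy SGD with per-sample gradient clipping.

    grad_oracle(w) returns one stochastic gradient, which may be
    heavy-tailed (only a k-th moment bound, no uniform Lipschitz L).
    Clipping bounds the sensitivity of each step by clip_thresh, so
    Gaussian noise with std sigma * clip_thresh yields DP for a
    suitably calibrated sigma (calibration not derived here).
    """
    w = np.asarray(w0, dtype=float)
    for _ in range(n_steps):
        g = grad_oracle(w)
        norm = np.linalg.norm(g)
        if norm > clip_thresh:          # shrink outlier gradients to the clip radius
            g = g * (clip_thresh / norm)
        noise = sigma * clip_thresh * np.random.standard_normal(g.shape)
        w = w - lr * (g + noise)        # noisy clipped gradient step
    return w
```

The clip threshold mediates a bias-variance trade-off: too small, and heavy-tailed gradients are severely biased by clipping; too large, and the privacy noise (proportional to the threshold) swamps the signal.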
Author Information
Andrew Lowy (USC)
Meisam Razaviyayn (University of Southern California)
More from the Same Authors
- 2021: Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses
  Andrew Lowy · Meisam Razaviyayn
- 2022: Policy gradient finds global optimum of nearly linear-quadratic control systems
  Yinbin Han · Meisam Razaviyayn · Renyuan Xu
- 2022: A Stochastic Optimization Framework for Fair Risk Minimization
  Andrew Lowy · Sina Baharlouei · Rakesh Pavan · Meisam Razaviyayn · Ahmad Beirami
- 2022: Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes
  Sina Baharlouei · Fatemeh Sheikholeslami · Meisam Razaviyayn · J. Zico Kolter
- 2022: Stochastic Differentially Private and Fair Learning
  Andrew Lowy · Devansh Gupta · Meisam Razaviyayn
- 2022: Poster Session 1
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2020 Poster: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
  Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2020 Spotlight: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
  Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2019 Poster: Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods
  Maher Nouiehed · Maziar Sanjabi · Tianjian Huang · Jason Lee · Meisam Razaviyayn
- 2018 Poster: On the Convergence and Robustness of Training GANs with Regularized Optimal Transport
  Maziar Sanjabi · Jimmy Ba · Meisam Razaviyayn · Jason Lee
- 2017 Poster: On Optimal Generalizability in Parametric Learning
  Ahmad Beirami · Meisam Razaviyayn · Shahin Shahrampour · Vahid Tarokh