

Plenary Speaker
in
Workshop: OPT 2021: Optimization for Machine Learning

Non-Euclidean Differentially Private Stochastic Convex Optimization, Cristóbal Guzmán



Abstract: Ensuring the privacy of users' data in machine learning models has become a crucial requirement in multiple domains. In this respect, differential privacy (DP) is the gold standard, due to its general and rigorous privacy guarantees, as well as its strong composition properties. For the particular case of stochastic convex optimization (SCO), recent efforts have established optimal rates for the excess risk under differential privacy in Euclidean setups. These bounds suffer a polynomial degradation of accuracy with respect to the dimension, which limits their applicability in high-dimensional settings. In this talk, I will present nearly dimension-independent rates on the excess risk for DP-SCO in the $\ell_1$ setup, as well as an investigation of more general $\ell_p$ setups, where $1\leq p\leq \infty$.

Based on joint work with Raef Bassily and Anupama Nandi.
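To make the setting concrete, the following is a minimal illustrative sketch (not the algorithm from the talk) of noisy stochastic mirror descent over the probability simplex, a standard ℓ1-geometry method: the entropy mirror map yields exponentiated-gradient updates, and Gaussian noise added to each stochastic gradient stands in for the DP mechanism. All function names and parameter choices here are hypothetical.

```python
import numpy as np

def noisy_mirror_descent(grad_fn, samples, dim, steps=300, lr=0.5,
                         noise_std=0.1, seed=0):
    """Illustrative noisy stochastic mirror descent on the simplex.

    grad_fn(x, z) returns a stochastic gradient at point x for sample z.
    Gaussian noise (noise_std) is a stand-in for a DP mechanism; this
    sketch does not compute a formal (eps, delta) privacy guarantee.
    """
    rng = np.random.default_rng(seed)
    x = np.full(dim, 1.0 / dim)        # uniform start: center of the simplex
    avg = np.zeros(dim)
    for _ in range(steps):
        z = samples[rng.integers(len(samples))]
        g = grad_fn(x, z) + rng.normal(0.0, noise_std, size=dim)  # noisy gradient
        x = x * np.exp(-lr * g)        # entropy mirror map -> multiplicative update
        x /= x.sum()                   # renormalize back onto the simplex
        avg += x
    return avg / steps                 # average iterate, as usual in SCO

# Usage: minimize E_z ||x - z||^2 with all samples at the first vertex.
e0 = np.zeros(5); e0[0] = 1.0
sol = noisy_mirror_descent(lambda x, z: 2.0 * (x - z), [e0], dim=5)
```

The entropy mirror map is what makes the method adapted to ℓ1 geometry: its updates are multiplicative and stay on the simplex without an explicit projection, which is one reason mirror-descent-type methods appear in non-Euclidean DP-SCO analyses.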