Poster
Differentially Private Generalized Linear Models Revisited
Raman Arora · Raef Bassily · Cristóbal Guzmán · Michael Menart · Enayat Ullah
Hall J (level 1) #818
Keywords: [ supervised learning ] [ generalized linear model ] [ optimization ] [ differential privacy ]
Abstract:
We study the problem of $(\epsilon,\delta)$-differentially private learning of linear predictors with convex losses. We provide results for two subclasses of loss functions. The first case is when the loss is smooth and non-negative but not necessarily Lipschitz (such as the squared loss). For this case, we establish an upper bound on the excess population risk of $\tilde{O}\!\left(\frac{\|w^*\|}{\sqrt{n}}+\min\left\{\frac{\|w^*\|^2}{(n\epsilon)^{2/3}},\frac{\sqrt{d}\,\|w^*\|^2}{n\epsilon}\right\}\right)$, where $n$ is the number of samples, $d$ is the dimension of the problem, and $w^*$ is the minimizer of the population risk. Apart from the dependence on $\|w^*\|$, our bound is essentially tight in all parameters. In particular, we show a lower bound of $\tilde{\Omega}\!\left(\frac{1}{\sqrt{n}}+\min\left\{\frac{\|w^*\|^{4/3}}{(n\epsilon)^{2/3}},\frac{\sqrt{d}\,\|w^*\|}{n\epsilon}\right\}\right)$. We also revisit the previously studied case of Lipschitz losses \cite{SSTT21}. For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\!\left(\frac{\|w^*\|}{\sqrt{n}}+\min\left\{\frac{\|w^*\|}{\sqrt{n\epsilon}},\frac{\sqrt{\mathrm{rank}}\,\|w^*\|}{n\epsilon}\right\}\right)$, where $\mathrm{rank}$ is the rank of the design matrix. This improves over existing work in the high-privacy regime. Finally, our algorithms involve a private model selection approach that we develop to enable attaining the stated rates without a priori knowledge of $\|w^*\|$.
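To unpack the upper bound for the smooth, non-negative case, the short derivation below (our reading of the stated rate, not an additional claim from the paper) compares the two terms inside the minimum and records when each dominates; the same comparison, with the obvious changes of exponents, applies to the lower bound and to the Lipschitz-loss rate.

% Comparing the two privacy-error terms in
% \tilde{O}\big(\|w^*\|/\sqrt{n} + \min\{\|w^*\|^2/(n\epsilon)^{2/3},\ \sqrt{d}\,\|w^*\|^2/(n\epsilon)\}\big):
\[
\frac{\|w^*\|^2}{(n\epsilon)^{2/3}} \;\le\; \frac{\sqrt{d}\,\|w^*\|^2}{n\epsilon}
\quad\Longleftrightarrow\quad
(n\epsilon)^{1/3} \;\le\; \sqrt{d}
\quad\Longleftrightarrow\quad
d \;\ge\; (n\epsilon)^{2/3}.
\]
% So the dimension-independent term governs the privacy cost once d \ge (n\epsilon)^{2/3},
% while the \sqrt{d}\,\|w^*\|^2/(n\epsilon) term is the smaller one in lower dimensions.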