Poster

Private Stochastic Convex Optimization with Optimal Rates

Raef Bassily · Vitaly Feldman · Kunal Talwar · Abhradeep Guha Thakurta

East Exhibition Hall B + C #163

Keywords: [ Learning Theory ] [ Optimization -> Convex Optimization ] [ Optimization -> Stochastic Optimization ] [ Theory ] [ Privacy, Anonymity, and Security ] [ Applications ]


Abstract: We study differentially private (DP) algorithms for stochastic convex optimization (SCO). In this problem the goal is to approximately minimize the population loss given i.i.d. samples from a distribution over convex and Lipschitz loss functions. A long line of existing work on private convex optimization focuses on the empirical loss and derives asymptotically tight bounds on the excess empirical loss. However, a significant gap exists in the known bounds for the population loss. We show that, up to logarithmic factors, the optimal excess population loss for DP algorithms is equal to the larger of the optimal non-private excess population loss and the optimal excess empirical loss of DP algorithms. This implies that, contrary to intuition based on private ERM, private SCO has asymptotically the same rate of $1/\sqrt{n}$ as non-private SCO in the parameter regime most common in practice. The best previous result in this setting gives a rate of $1/n^{1/4}$. Our approach builds on existing differentially private algorithms and relies on the analysis of algorithmic stability to ensure generalization.
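The DP algorithms the abstract refers to are noisy-gradient methods in the style of DP-SGD; combining the stated characterization with the known DP-ERM bound gives, up to logarithmic factors, an excess population loss of roughly $1/\sqrt{n} + \sqrt{d}/(\epsilon n)$ for $(\epsilon,\delta)$-DP. Below is a minimal NumPy sketch of the noisy projected SGD primitive, not the authors' exact algorithm: the function name `dp_sgd`, the single-pass schedule, the step size, and the noise constant are illustrative assumptions, and a tight privacy guarantee would require a proper accountant.

```python
import numpy as np

def dp_sgd(data, grad_fn, dim, radius, lipschitz, eps, delta, steps=None, seed=0):
    """Noisy projected SGD over the Euclidean ball of the given radius.

    data: array of n samples; grad_fn(w, x) returns a per-sample gradient,
    clipped below to norm at most `lipschitz` as a safeguard.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    steps = steps or n  # one pass over the data, as in single-epoch analyses
    # Gaussian noise calibrated in the moments-accountant style for the whole
    # run to be roughly (eps, delta)-DP; the constant here is an illustrative
    # assumption, not a tight accounting.
    sigma = lipschitz * np.sqrt(steps * np.log(1.0 / delta)) / (n * eps)
    eta = radius / (lipschitz * np.sqrt(steps))  # standard SGD step size
    w = np.zeros(dim)
    avg = np.zeros(dim)
    for _ in range(steps):
        x = data[rng.integers(n)]          # uniformly sampled example
        g = grad_fn(w, x)
        norm = np.linalg.norm(g)
        if norm > lipschitz:               # clip to enforce the Lipschitz bound
            g = g * (lipschitz / norm)
        g = g + rng.normal(0.0, sigma, size=dim)  # privatize the gradient
        w = w - eta * g
        norm_w = np.linalg.norm(w)
        if norm_w > radius:                # project back onto the ball
            w = w * (radius / norm_w)
        avg += w
    return avg / steps                     # averaged iterate, for convex losses
```

Returning the averaged iterate is the standard choice for convex Lipschitz losses; the paper's stability-based analysis is what lets guarantees of this kind transfer from the empirical loss to the population loss.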
