

Spotlight Poster

Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks

Daogao Liu · Arun Ganesh · Sewoong Oh · Abhradeep Guha Thakurta

Great Hall & Hall B1+B2 (level 1) #1619
Wed 13 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

We revisit the problem of non-convex optimization under differential privacy constraints. Building on the variance-reduced algorithm SpiderBoost, we propose a framework that employs two types of gradient oracles: one that estimates the gradient at a single point, and a cheaper one that estimates the gradient difference between two points. Our framework keeps the gradient estimates accurate throughout the optimization, which improves the rates for identifying second-order stationary points. Additionally, we consider the harder task of locating the global minimum of a non-convex objective via the exponential mechanism, under almost no assumptions. Our preliminary results suggest that the regularized exponential mechanism can match previous empirical and population risk bounds while removing the smoothness assumptions required by earlier polynomial-time algorithms. Furthermore, if running time is disregarded, the exponential mechanism achieves a strong population risk bound, for which we provide a nearly matching lower bound.
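To make the two-oracle framework concrete, here is a minimal sketch of a SpiderBoost-style loop with privatized oracles, assuming only the structure described in the abstract. All names, the refresh schedule, and the Gaussian noise scales are illustrative assumptions; the paper's actual calibration of noise to the privacy budget and its drift-control logic are not reproduced here.

```python
import numpy as np

def dp_spider_sketch(grad, grad_diff, x0, num_steps, refresh_every,
                     eta, sigma1, sigma2, seed=0):
    """Hedged sketch of variance-reduced DP optimization with two oracles.

    grad(x)         -- oracle estimating the gradient at a single point x
    grad_diff(x, y) -- cheaper oracle estimating grad(x) - grad(y)
    refresh_every   -- how often to re-anchor with the single-point oracle
    sigma1, sigma2  -- per-oracle Gaussian noise scales; a real DP analysis
                       would calibrate these to sensitivity and (eps, delta),
                       which this sketch omits
    """
    rng = np.random.default_rng(seed)
    x, x_prev = x0.copy(), x0.copy()
    v = np.zeros_like(x0)
    for t in range(num_steps):
        if t % refresh_every == 0:
            # Expensive oracle: fresh privatized gradient estimate at x.
            v = grad(x) + sigma1 * rng.standard_normal(x.shape)
        else:
            # Cheap oracle: accumulate a privatized gradient difference,
            # keeping the running estimate v accurate between refreshes.
            v = v + grad_diff(x, x_prev) + sigma2 * rng.standard_normal(x.shape)
        x_prev, x = x, x - eta * v
    return x
```

With grad computed on a large batch and grad_diff on a small one, the running estimate v stays accurate between refreshes at a lower privacy and computation cost, which is the property the abstract credits for the improved second-order stationarity rates.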
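For the second part, the regularized exponential mechanism is, in its generic form, a Gibbs sampler of the following shape; the abstract does not state the paper's specific regularizer or temperature, so the quadratic regularizer and the parameters below are only the standard template:

```latex
\[
  p(x) \;\propto\; \exp\!\Bigl(-\beta\,\bigl(F(x) + \tfrac{\mu}{2}\lVert x\rVert_2^2\bigr)\Bigr),
  \qquad x \in \mathcal{K},
\]
% \beta is calibrated to the privacy budget and \mu > 0 weights the
% regularizer; both choices here are assumptions of this template,
% not the paper's stated parameters.
```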
