Poster
Differentially Private Online-to-batch for Smooth Losses
Qinzi Zhang · Hoang Tran · Ashok Cutkosky
Hall J (level 1) #1034
Keywords: [ Online-to-Batch ] [ Convex Optimization ] [ Differential Privacy ] [ Adaptive ] [ Online Learning ]
Abstract:
We develop a new reduction that converts any online convex optimization algorithm suffering O(√T) regret into an ϵ-differentially private stochastic convex optimization algorithm with the optimal convergence rate Õ(1/√T + √d/ϵT) on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
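For readers unfamiliar with the baseline: the classical non-private online-to-batch conversion runs an online learner on the sequence of stochastic gradients and outputs the average iterate, so O(√T) regret translates into a 1/√T convergence rate. The sketch below illustrates only this baseline, using online gradient descent; the `noise_std` parameter is a hypothetical stand-in showing where a privacy mechanism could perturb the gradients, and it is not the paper's reduction, whose noise calibration and privacy accounting are the actual contribution.

```python
import numpy as np

def online_to_batch(grad_oracle, dim, T, lr=0.1, noise_std=0.0):
    """Classical online-to-batch: run online gradient descent (OGD) on
    stochastic gradients and return the average of its iterates.

    grad_oracle(w) returns a stochastic gradient at w. Setting
    noise_std > 0 adds Gaussian noise to each gradient -- a placeholder
    for a privacy mechanism, NOT the paper's calibrated reduction.
    """
    rng = np.random.default_rng(0)
    w = np.zeros(dim)      # online learner's current iterate
    avg = np.zeros(dim)    # running average of iterates (the batch output)
    for t in range(1, T + 1):
        avg += (w - avg) / t                # incremental average of w_1..w_t
        g = grad_oracle(w)
        if noise_std > 0:
            g = g + rng.normal(0.0, noise_std, size=dim)
        w = w - (lr / np.sqrt(t)) * g       # OGD step; O(sqrt(T)) regret
    return avg

# Toy usage: least squares with noisy gradient estimates.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
oracle = lambda w: 2 * A.T @ (A @ w - b) / len(b) + rng.normal(scale=0.1, size=5)
w_hat = online_to_batch(oracle, dim=5, T=2000, noise_std=0.5)
```

Anytime variants of this conversion query gradients at the running average rather than at the learner's own iterate; either way, the key point is that the online learner's regret directly controls the error of the averaged output.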