
Poster

DP-SGD with Fixed-Size Minibatches: Tighter Guarantees with or without Replacement

Jeremiah Birrell · Reza Ebrahimi · Rouzbeh Behnia · Jason Pacheco

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Differentially private stochastic gradient descent (DP-SGD) has been instrumental in privately training deep learning models by providing a framework to control and track the privacy loss incurred during training, which is naturally characterized by Rényi differential privacy (RDP). At the core of this computation lies a subsampling method that uses a privacy amplification lemma to enhance the privacy guarantees provided by the additive noise. Fixed-size subsampling is appealing for its constant memory usage, unlike the variable-size minibatches of Poisson subsampling, and is also of interest for addressing class imbalance and for federated learning. However, the current theoretical guarantees for fixed-size subsampling are not tight. We present a new privacy accountant for DP-SGD with fixed-size subsampled RDP, both without and with replacement. The former improves on the best current bound (Wang et al., 2019) by a factor of $4$. The latter includes non-asymptotic upper and lower bounds and, to the authors' knowledge, is the first such analysis of fixed-size RDP with replacement for DP-SGD. We compare fixed-size and Poisson subsampling analytically and empirically, and show that DP-SGD gradients under fixed-size subsampling exhibit lower variance in practice, in addition to the memory-usage benefits.
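As an illustrative sketch (not taken from the paper), the three subsampling schemes contrasted in the abstract can be written in a few lines of NumPy; the variable names and parameters here are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch_size = 10_000, 256
q = batch_size / n  # Poisson inclusion probability matching the expected batch size

# Poisson subsampling: each example is included independently with
# probability q, so the minibatch size is random (Binomial(n, q)).
poisson_batch = np.flatnonzero(rng.random(n) < q)

# Fixed-size subsampling without replacement: exactly `batch_size`
# distinct examples every step, giving constant memory usage.
fixed_wor_batch = rng.choice(n, size=batch_size, replace=False)

# Fixed-size subsampling with replacement: exactly `batch_size` draws,
# but the same example may appear more than once.
fixed_wr_batch = rng.choice(n, size=batch_size, replace=True)

print(len(poisson_batch), len(fixed_wor_batch), len(fixed_wr_batch))
```

The fixed-size variants always return `batch_size` indices, whereas the Poisson batch length fluctuates around its mean from step to step; this is the constant-memory property the abstract highlights.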
