

Poster

Nearly Tight Black-Box Auditing of Differentially Private Machine Learning

Meenatchi Sundaram Muthu Selva Annamalai · Emiliano De Cristofaro

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: This paper presents a nearly tight auditing procedure for the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box threat model. Our main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters. For models trained on MNIST and CIFAR-10 at theoretical $\varepsilon=10.0$, our auditing procedure yields empirical estimates of $\varepsilon_{emp} = 7.21$ and $6.95$, respectively, on a 1,000-record sample and $\varepsilon_{emp} = 6.48$ and $4.96$ on the full datasets. By contrast, previous work only achieved tight audits in stronger (less realistic) white-box models, allowing the adversary to access the model's inner parameters and insert arbitrary gradients. Overall, our auditing procedure can be used to detect bugs and DP violations more easily and offers valuable insight into how the privacy analysis of DP-SGD can be further improved.
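Auditing procedures of this kind typically convert the false-positive and false-negative rates of a membership-inference distinguisher into an empirical privacy estimate. The sketch below shows the standard conversion used in the DP auditing literature (not necessarily the paper's exact procedure); the function name and example rates are illustrative assumptions.

```python
import math

def empirical_epsilon(fpr: float, fnr: float, delta: float = 1e-5) -> float:
    """Convert a distinguisher's error rates into an empirical epsilon
    estimate, using the standard (eps, delta)-DP trade-off bound:
    an (eps, delta)-DP mechanism forces fpr + e^eps * fnr >= 1 - delta
    (and symmetrically with fpr/fnr swapped). Solving for eps gives a
    lower bound on the privacy leakage actually exhibited.

    NOTE: illustrative helper, not the authors' exact estimator; tight
    audits additionally use confidence intervals (e.g. Clopper-Pearson)
    on the measured rates.
    """
    candidates = []
    if fnr > 0:
        candidates.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0:
        candidates.append(math.log((1 - delta - fnr) / fpr))
    return max(candidates) if candidates else float("inf")

# Hypothetical attack performance: 1% false positives, 10% false negatives.
eps_emp = empirical_epsilon(fpr=0.01, fnr=0.10)
```

With these example rates the estimate is roughly 4.5; a weaker distinguisher (higher error rates) yields a smaller empirical epsilon, which is why looser black-box attacks historically under-reported leakage relative to the theoretical budget.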
