
Poster

Smoothly Bounding User Contributions in Differential Privacy

Alessandro Epasto · Mohammad Mahdian · Jieming Mao · Vahab Mirrokni · Lijie Ren

Poster Session 1 #257

Keywords: [ Hardness of Learning and Approximations ] [ Deep Learning -> Optimization for Deep Networks; Theory -> Computational Complexity; Theory ] [ Spaces of Functions and Kernels ] [ Theory ]


Abstract:

A differentially private algorithm guarantees that the data of a single user will not significantly change the output distribution of the algorithm. When a user contributes more data points, more information can be collected to improve the algorithm's performance. At the same time, however, more noise may need to be added to keep the algorithm differentially private, and this can hurt the algorithm's performance. Amin et al. (2019) initiated the study of bounding user contributions and proposed a natural algorithm that limits the number of samples each user can contribute to a fixed threshold.
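To make the sample-limiting baseline concrete, here is a minimal sketch for a user-level DP sum of values in [0, 1]. The cap `tau`, the use of the Laplace mechanism, and the toy data are illustrative assumptions, not details from the paper.

```python
# Sketch of the sample-limiting baseline: keep at most `tau` samples per
# user, then release a Laplace-noised sum (assumed setting, not the paper's).
import numpy as np

def dp_capped_sum(user_data, tau, epsilon, rng=None):
    """User-level epsilon-DP sum of values in [0, 1].

    Removing one user changes the capped sum by at most `tau` (each user
    keeps at most `tau` values, each in [0, 1]), so Laplace noise with
    scale tau / epsilon suffices.
    """
    rng = rng or np.random.default_rng()
    capped = np.concatenate([np.asarray(x, dtype=float)[:tau] for x in user_data])
    capped = np.clip(capped, 0.0, 1.0)   # enforce the assumed value range
    sensitivity = tau                    # one user moves the sum by <= tau
    return capped.sum() + rng.laplace(scale=sensitivity / epsilon)

# Toy usage: three users with unequal contribution counts.
users = [[0.2, 0.9, 0.4, 0.8], [0.5], [0.7, 0.1]]
print(dp_capped_sum(users, tau=2, epsilon=1.0))
```

Note that the heavy contributor's extra samples are simply discarded, which is the utility loss the smooth-bounding approach targets.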

For a better trade-off between utility and the privacy guarantee, we propose a method that smoothly bounds user contributions by assigning appropriate weights to data points, and we apply it to mean and quantile estimation, linear regression, and empirical risk minimization. We show that our algorithm provably outperforms the sample-limiting algorithm. We conclude with experimental evaluations that validate our theoretical results (a sketch of the contrast follows).
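For contrast with the baseline above, here is one plausible way to smoothly bound contributions by down-weighting every sample instead of discarding the excess. The specific rule, giving user u with m_u samples per-sample weight min(1, tau / m_u), is an assumption for illustration, not necessarily the paper's exact weighting scheme.

```python
# Sketch of smooth bounding via weights: every sample is used, but no
# user's total weight exceeds `tau` (illustrative rule, not the paper's).
import numpy as np

def dp_weighted_sum(user_data, tau, epsilon, rng=None):
    """User-level epsilon-DP weighted sum of values in [0, 1].

    Each user's total weight is capped at `tau`, so removing one user
    changes the weighted sum by at most `tau`; Laplace noise with scale
    tau / epsilon again suffices.
    """
    rng = rng or np.random.default_rng()
    total = 0.0
    for x in user_data:
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        w = min(1.0, tau / len(x))   # smooth cap: down-weight, don't drop
        total += w * x.sum()
    return total + rng.laplace(scale=tau / epsilon)

users = [[0.2, 0.9, 0.4, 0.8], [0.5], [0.7, 0.1]]
print(dp_weighted_sum(users, tau=2, epsilon=1.0))
```

Both sketches add noise at the same scale, but the weighted version retains information from all of a heavy contributor's samples rather than an arbitrary subset, which is the intuition behind the claimed utility gain.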
