Poster
Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning
Jiachen T. Wang · Saeed Mahloujifar · Shouda Wang · Ruoxi Jia · Prateek Mittal
Propose-Test-Release (PTR) is a differential privacy framework that works with the local sensitivity of functions rather than their global sensitivity. It is typically used to release robust statistics such as the median or trimmed mean in a differentially private manner. Although PTR was introduced over a decade ago and is widely used, applying it in settings such as robust SGD, which require many adaptive robust queries, is challenging. This is mainly due to the lack of a Rényi Differential Privacy (RDP) analysis, an essential ingredient of the moments accountant approach underlying differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\varepsilon, \delta)$-DP. We also derive an algorithm-specific privacy amplification bound for PTR under subsampling, and show that it is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy-loss accounting for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants of Byzantine-robust training algorithms that use robust statistics for gradient aggregation. We conduct experiments under label, feature, and gradient corruption across different datasets and architectures, and show that the PTR-based private and robust training algorithm significantly improves utility over the baseline.
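For readers unfamiliar with the framework, a minimal sketch of one run of the classic PTR mechanism (not the generalized variant from this paper) is below. It assumes the caller supplies `dist_to_unstable`, the distance from the dataset to the nearest dataset whose local sensitivity exceeds the proposed bound `b`; computing that distance efficiently is problem-specific and the function names here are illustrative, not from the paper.

```python
import numpy as np


def propose_test_release(data, f, dist_to_unstable, b, eps, delta, rng=None):
    """One run of the classic Propose-Test-Release mechanism (sketch).

    data             : private dataset (1-D array here)
    f                : query, e.g. np.median
    dist_to_unstable : distance (in records) from `data` to the nearest
                       dataset whose local sensitivity of f exceeds b
    b                : proposed bound on local sensitivity
    eps, delta       : privacy parameters of the test/release steps
    """
    rng = rng or np.random.default_rng()
    # Test: noisy check that the dataset is far from any "unstable" one.
    noisy_dist = dist_to_unstable + rng.laplace(scale=1.0 / eps)
    if noisy_dist <= np.log(1.0 / (2.0 * delta)) / eps:
        return None  # refuse to answer (the "bottom" output)
    # Release: local sensitivity is at most b, so Laplace(b/eps) noise suffices.
    return f(data) + rng.laplace(scale=b / eps)
```

The test step is itself a Laplace mechanism on a 1-sensitive distance query, which is what makes the overall procedure differentially private even though the release step only accounts for local sensitivity; the paper's contribution is the RDP analysis needed to compose many adaptive runs of this kind of procedure.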
Author Information
Jiachen T. Wang (Princeton University)
Saeed Mahloujifar (Princeton)
Shouda Wang (Princeton University)
Ruoxi Jia (Virginia Tech)
Prateek Mittal (Princeton University)
More from the Same Authors
- 2021: RobustBench: a standardized adversarial robustness benchmark
  Francesco Croce · Maksym Andriushchenko · Vikash Sehwag · Edoardo Debenedetti · Nicolas Flammarion · Mung Chiang · Prateek Mittal · Matthias Hein
- 2021: A Novel Self-Distillation Architecture to Defeat Membership Inference Attacks
  Xinyu Tang · Saeed Mahloujifar · Liwei Song · Virat Shejwalkar · Amir Houmansadr · Prateek Mittal
- 2022: Lower Bounds on 0-1 Loss for Multi-class Classification with a Test-time Attacker
  Sihui Dai · Wenxin Ding · Arjun Nitin Bhagoji · Daniel Cullina · Prateek Mittal · Ben Zhao
- 2022 Poster: Formulating Robustness Against Unforeseen Attacks
  Sihui Dai · Saeed Mahloujifar · Prateek Mittal
- 2022 Poster: Overparameterization from Computational Constraints
  Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Mingyuan Wang
- 2022 Poster: CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
  Xuanli He · Qiongkai Xu · Yi Zeng · Lingjuan Lyu · Fangzhao Wu · Jiwei Li · Ruoxi Jia
- 2022 Poster: Understanding Robust Learning through the Lens of Representation Similarities
  Christian Cianfarani · Arjun Nitin Bhagoji · Vikash Sehwag · Ben Zhao · Heather Zheng · Prateek Mittal
- 2020 Poster: HYDRA: Pruning Adversarially Robust Neural Networks
  Vikash Sehwag · Shiqi Wang · Prateek Mittal · Suman Jana
- 2019 Poster: Lower Bounds on Adversarial Robustness from Optimal Transport
  Arjun Nitin Bhagoji · Daniel Cullina · Prateek Mittal
- 2018 Poster: PAC-learning in the presence of adversaries
  Daniel Cullina · Arjun Nitin Bhagoji · Prateek Mittal