Poster
In Differential Privacy, There is Truth: on Vote-Histogram Leakage in Ensemble Private Learning
Jiaqi Wang · Roei Schuster · I Shumailov · David Lie · Nicolas Papernot
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract high-fidelity histograms of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and would not be possible at all without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea.
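To make the mechanism concrete, below is a minimal, self-contained sketch (not the paper's code) of a Gaussian-noise noisy-argmax aggregator and of the kind of histogram-recovery attack the abstract describes. The setting is entirely hypothetical: 10 teachers, 3 classes, a noise scale `sigma`, and a brute-force inversion that matches observed label frequencies against every candidate histogram; the paper's actual attack and noise mechanism may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: 10 teachers vote over 3 classes for one fixed input.
# This vote histogram is the sensitive quantity the attack recovers.
votes = np.array([6.0, 3.0, 1.0])
sigma = 4.0  # noise scale; a larger sigma gives a stronger DP guarantee

def output_frequencies(hist, sigma, n_queries):
    """Simulate n_queries PATE answers (Gaussian noisy argmax) for one input
    and return the empirical frequency of each output label."""
    hist = np.asarray(hist, dtype=float)
    noisy = hist + rng.normal(0.0, sigma, size=(n_queries, hist.size))
    return np.bincount(noisy.argmax(axis=1), minlength=hist.size) / n_queries

# Step 1: the adversary queries the same input repeatedly. Because each
# answer redraws the noise, the label frequencies depend on the gaps
# between all vote counts, not just on which class wins.
observed = output_frequencies(votes, sigma, n_queries=200_000)

# Step 2: invert the mechanism by matching the observed frequencies
# against those induced by every candidate histogram of 10 votes.
candidates = [(a, b, 10 - a - b) for a in range(11) for b in range(11 - a)]
best = min(
    candidates,
    key=lambda h: np.sum((output_frequencies(h, sigma, 50_000) - observed) ** 2),
)
print("observed label frequencies:", observed)
print("recovered vote histogram:  ", best)  # ideally close to (6, 3, 1)
```

Note the contrast with the noiseless mechanism: with sigma = 0 the aggregator is deterministic, so repeated queries reveal only the winning label. It is precisely the noise added for differential privacy that makes the output distribution depend on the trailing vote counts, consistent with the abstract's observation that the attack becomes easier as more noise is added.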
Author Information
Jiaqi Wang (University of Toronto)
Roei Schuster (Cornell Tech, Tel Aviv University)
I Shumailov (University of Toronto)
David Lie (University of Toronto)
Nicolas Papernot (University of Toronto and Vector Institute)
More from the Same Authors
- 2020: Dataset Inference: Ownership Resolution in Machine Learning
  Nicolas Papernot
- 2020: Challenges of Differentially Private Prediction in Healthcare Settings
  Nicolas Papernot
- 2022: Wide Attention Is The Way Forward For Transformers
  Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
- 2022: DARTFormer: Finding The Best Type Of Attention
  Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
- 2022: Invited Talk
  Nicolas Papernot
- 2022 Poster: Washing The Unwashable: On The (Im)possibility of Fairwashing Detection
  Ali Shahin Shamsabadi · Mohammad Yaghini · Natalie Dullerud · Sierra Wyllie · Ulrich Aïvodji · Aisha Alaagib · Sébastien Gambs · Nicolas Papernot
- 2022 Poster: Dataset Inference for Self-Supervised Models
  Adam Dziedzic · Haonan Duan · Muhammad Ahmad Kaleem · Nikita Dhawan · Jonas Guan · Yannis Cattan · Franziska Boenisch · Nicolas Papernot
- 2022 Poster: Rapid Model Architecture Adaption for Meta-Learning
  Yiren Zhao · Xitong Gao · I Shumailov · Nicolo Fusi · Robert Mullins
- 2022 Poster: On the Limitations of Stochastic Pre-processing Defenses
  Yue Gao · I Shumailov · Kassem Fawaz · Nicolas Papernot
- 2022 Poster: The Privacy Onion Effect: Memorization is Relative
  Nicholas Carlini · Matthew Jagielski · Chiyuan Zhang · Nicolas Papernot · Andreas Terzis · Florian Tramer
- 2021 Poster: Manipulating SGD with Data Ordering Attacks
  I Shumailov · Zakhar Shumaylov · Dmitry Kazhdan · Yiren Zhao · Nicolas Papernot · Murat Erdogdu · Ross J Anderson
- 2020 Poster: De-Anonymizing Text by Fingerprinting Language Generation
  Zhen Sun · Roei Schuster · Vitaly Shmatikov
- 2020 Spotlight: De-Anonymizing Text by Fingerprinting Language Generation
  Zhen Sun · Roei Schuster · Vitaly Shmatikov