Reconstructing Training Data with Informed Adversaries
Borja Balle · Giovanni Cherubin · Jamie Hayes
Event URL: https://openreview.net/forum?id=Yi2DZTbnBl4
Given access to a machine learning model, can an adversary reconstruct the model’s training data? This work proposes a formal threat model to study this question, shows that reconstruction attacks are feasible in theory and in practice, and presents preliminary results assessing how different factors of standard machine learning pipelines affect the success of reconstruction. Finally, we empirically evaluate what levels of differential privacy suffice to prevent reconstruction attacks.
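To make the threat model concrete, here is a toy sketch, not the paper's attack: if the released "model" is simply the empirical mean of the training set, an informed adversary who knows every training point except one can recover the missing point exactly from the released parameters. The dataset size, dimension, and the `release_model` / `reconstruct_missing_point` helpers below are illustrative assumptions.

```python
import numpy as np

def release_model(dataset):
    """Toy 'model': release the empirical mean of the training data."""
    return dataset.mean(axis=0)

def reconstruct_missing_point(released_mean, known_points, n):
    """Informed adversary: knows n-1 of the n training points plus the
    released mean, and solves exactly for the single unknown point."""
    return n * released_mean - known_points.sum(axis=0)

rng = np.random.default_rng(0)
n, d = 100, 5                      # illustrative dataset size and dimension
dataset = rng.normal(size=(n, d))  # full (private) training set
target = dataset[-1]               # the point the adversary tries to recover
known = dataset[:-1]               # the adversary's side information

mean = release_model(dataset)
guess = reconstruct_missing_point(mean, known, n)
print("reconstruction error:", np.linalg.norm(guess - target))  # ~0, up to float error
```

In this toy setting, adding calibrated noise to the released mean (as differential privacy would require) blurs the exact recovery; the paper's empirical evaluation studies how much noise is needed to prevent reconstruction in realistic machine learning pipelines.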
Author Information
Borja Balle (DeepMind)
Giovanni Cherubin (Alan Turing Institute)
Jamie Hayes (University College London)
More from the Same Authors
- 2021 Workshop: Privacy in Machine Learning (PriML) 2021
  Yu-Xiang Wang · Borja Balle · Giovanni Cherubin · Kamalika Chaudhuri · Antti Honkela · Jonathan Lebensold · Casey Meehan · Mi Jung Park · Adrian Weller · Yuqing Zhu
- 2018 Poster: Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences
  Borja Balle · Gilles Barthe · Marco Gaboardi
- 2018 Poster: Contamination Attacks and Mitigation in Multi-Party Machine Learning
  Jamie Hayes · Olga Ohrimenko
- 2017 Poster: Generating steganographic images via adversarial training
  Jamie Hayes · George Danezis
- 2016 Workshop: Private Multi-Party Machine Learning
  Borja Balle · Aurélien Bellet · David Evans · Adrià Gascón