Federated learning is an established method for training machine learning models without sharing the training data. However, recent work has shown that it cannot guarantee data privacy, as shared gradients can still leak sensitive information. To formalize the problem of gradient leakage, we propose a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary, phrased as an optimization problem. We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary, each making different assumptions on the probability distributions of the data and its gradients. Our experiments confirm the effectiveness of the Bayes optimal adversary when it has knowledge of the underlying distribution. They further show that several existing heuristic defenses are not effective against stronger attacks, especially early in the training process. Our findings thus indicate that constructing more effective defenses and evaluating them properly remains an open problem.
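Concretely, the Bayes optimal adversary can be read as maximum a posteriori inference over inputs: given an observed gradient g, it solves argmin_x [ -log p(g | x) - log p(x) ], and existing leakage attacks approximate the likelihood term with a gradient-matching distance and the prior term with a hand-crafted regularizer. The PyTorch sketch below illustrates this family of approximations; the model, optimizer settings, and total-variation prior weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gradient_leakage_attack(model, target_grads, label, input_shape,
                            steps=300, lr=0.1, prior_weight=0.01):
    """Reconstruct an input from a shared gradient via gradient matching.

    Minimizes ||grad(model, x_hat) - target_grads||^2 (approximating the
    likelihood term) plus a total-variation penalty (a crude stand-in for
    the data prior in the Bayes optimal objective).
    """
    x_hat = torch.randn(input_shape, requires_grad=True)  # random init
    opt = torch.optim.Adam([x_hat], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_hat), label)
        # create_graph=True lets us differentiate the matching loss w.r.t. x_hat.
        grads = torch.autograd.grad(loss, list(model.parameters()),
                                    create_graph=True)
        # Gradient-matching term: how well x_hat explains the observed gradient.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        # Total-variation prior over the image (illustrative assumption).
        tv = (x_hat[..., 1:, :] - x_hat[..., :-1, :]).abs().sum() \
           + (x_hat[..., :, 1:] - x_hat[..., :, :-1]).abs().sum()
        (match + prior_weight * tv).backward()
        opt.step()
    return x_hat.detach()

# Hypothetical usage: model, labels, and the gradients observed by the server.
# x_rec = gradient_leakage_attack(model, observed_grads, labels, (1, 3, 32, 32))
```

Different choices for the matching distance and the prior recover different published attacks; a stronger adversary with actual knowledge of p(x) can do strictly better than such heuristics.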
Author Information
Mislav Balunovic (ETH Zurich)
Dimitar Dimitrov (ETH Zurich)
Martin Vechev (ETH Zurich)
More from the Same Authors
- 2022: Efficient Robustness Verification of Neural Ordinary Differential Equations
  Mustafa Zeqiri · Mark Müller · Marc Fischer · Martin Vechev
- 2022: Generating Intuitive Fairness Specifications for Natural Language Processing
  Florian E. Dorner · Momchil Peychev · Nikola Konstantinov · Naman Goel · Elliott Ash · Martin Vechev
- 2022: Just Avoid Robust Inaccuracy: Boosting Robustness Without Sacrificing Accuracy
  Yannick Merkli · Pavol Bielik · Petar Tsankov · Martin Vechev
- 2022: Certified Training: Small Boxes are All You Need
  Mark Müller · Franziska Eckert · Marc Fischer · Martin Vechev
- 2022: FARE: Provably Fair Representation Learning
  Nikola Jovanović · Mislav Balunovic · Dimitar Dimitrov · Martin Vechev
- 2022 Poster: Learning to Configure Computer Networks with Neural Algorithmic Reasoning
  Luca Beurer-Kellner · Martin Vechev · Laurent Vanbever · Petar Veličković
- 2022 Poster: (De-)Randomized Smoothing for Decision Stump Ensembles
  Miklós Horváth · Mark Müller · Marc Fischer · Martin Vechev
- 2022 Poster: LAMP: Extracting Text from Gradients with Language Model Priors
  Mislav Balunovic · Dimitar Dimitrov · Nikola Jovanović · Martin Vechev
- 2021 Poster: Automated Discovery of Adaptive Attacks on Adversarial Defenses
  Chengyuan Yao · Pavol Bielik · Petar Tsankov · Martin Vechev
- 2020 Poster: Learning Certified Individually Fair Representations
  Anian Ruoss · Mislav Balunovic · Marc Fischer · Martin Vechev
- 2020 Poster: Certified Defense to Image Transformations via Randomized Smoothing
  Marc Fischer · Maximilian Baader · Martin Vechev
- 2019 Poster: Beyond the Single Neuron Convex Barrier for Neural Network Certification
  Gagandeep Singh · Rupanshu Ganvir · Markus Püschel · Martin Vechev
- 2019 Poster: Certifying Geometric Robustness of Neural Networks
  Mislav Balunovic · Maximilian Baader · Gagandeep Singh · Timon Gehr · Martin Vechev
- 2018 Poster: Learning to Solve SMT Formulas
  Mislav Balunovic · Pavol Bielik · Martin Vechev
- 2018 Oral: Learning to Solve SMT Formulas
  Mislav Balunovic · Pavol Bielik · Martin Vechev
- 2018 Poster: Fast and Effective Robustness Certification
  Gagandeep Singh · Timon Gehr · Matthew Mirman · Markus Püschel · Martin Vechev