Poster
Inverting Gradients - How easy is it to break privacy in federated learning?
Jonas Geiping · Hartmut Bauermeister · Hannah Dröge · Michael Moeller

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1706

The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data. This protocol has been designed not only to train neural networks data-efficiently, but also to provide privacy benefits for users, as their input data remains on device and only parameter gradients are shared. But how secure is sharing parameter gradients? Previous attacks have provided a false sense of security, by succeeding only in contrived settings - even for a single image. However, by exploiting a magnitude-invariant loss along with optimization strategies based on adversarial attacks, we show that it is actually possible to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks. We analyze the effects of architecture as well as parameters on the difficulty of reconstructing an input image and prove that any input to a fully connected layer can be reconstructed analytically independent of the remaining architecture. Finally, we discuss settings encountered in practice and show that even aggregating gradients over several iterations or several images does not guarantee the user's privacy in federated learning applications.
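As a rough illustration of the attack sketched in the abstract, below is a minimal PyTorch sketch of optimization-based gradient inversion with a magnitude-invariant (cosine) objective. All names here (model, target_grads, label, invert_gradients) are hypothetical placeholders rather than the authors' released code; the label is assumed known (the paper notes it can often be inferred from the gradient of the last layer), and the full attack in the paper additionally applies a total variation image prior and signs the gradient before each update step, both omitted here for brevity.

    import torch
    import torch.nn.functional as F

    def cosine_gradient_loss(grads, target_grads):
        # Magnitude-invariant objective: 1 - cosine similarity between the
        # candidate input's gradient and the gradient shared by the user.
        dot = sum((g * t).sum() for g, t in zip(grads, target_grads))
        g_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        t_norm = torch.sqrt(sum(t.pow(2).sum() for t in target_grads))
        return 1.0 - dot / (g_norm * t_norm)

    def invert_gradients(model, target_grads, label, shape, steps=2000, lr=0.1):
        # Start from random noise and optimize the input until its gradient
        # points in the same direction as the observed gradient.
        x = torch.randn(shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(model(x), label)
            grads = torch.autograd.grad(loss, model.parameters(),
                                        create_graph=True)
            rec_loss = cosine_gradient_loss(grads, target_grads)
            rec_loss.backward()
            opt.step()
        return x.detach()

    # In a federated setting, target_grads is the update a user shares, e.g.:
    #   loss = F.cross_entropy(model(x_user), label)
    #   target_grads = torch.autograd.grad(loss, model.parameters())

The analytic claim about fully connected layers follows, roughly, from the structure of their gradients: for a layer y = Wx + b, backpropagation gives dL/dW = (dL/dy) x^T and dL/db = dL/dy, so the input is recoverable as x = (dL/dW)_i / (dL/db)_i for any row i with a nonzero bias gradient, independent of the rest of the network.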

Author Information

Jonas Geiping (University of Siegen)

Hello, I’m Jonas. I conduct research in computer science as a postdoc at the University of Maryland. My background is in mathematics, more specifically in mathematical optimization, and I am interested in research that intersects current deep learning and mathematical optimization, with my main area of application being computer vision.

Hartmut Bauermeister (University of Siegen)
Hannah Dröge (University of Siegen)
Michael Moeller (University of Siegen)

More from the Same Authors

  • 2023 Poster: Kissing to Find a Match: Efficient Low-Rank Permutation Representation
    Hannah Dröge · Zorah Lähner · Yuval Bahat · Onofre Martorell Nadal · Felix Heide · Michael Moeller
  • 2021 Poster: Adversarial Examples Make Strong Poisons
    Liam Fowl · Micah Goldblum · Ping-yeh Chiang · Jonas Geiping · Wojciech Czaja · Tom Goldstein
  • 2020 Poster: MetaPoison: Practical General-purpose Clean-label Data Poisoning
    W. Ronny Huang · Jonas Geiping · Liam Fowl · Gavin Taylor · Tom Goldstein
  • 2019: Poster Session
    Jonathan Scarlett · Piotr Indyk · Ali Vakilian · Adrian Weller · Partha P Mitra · Benjamin Aubin · Bruno Loureiro · Florent Krzakala · Lenka Zdeborová · Kristina Monakhova · Joshua Yurtsever · Laura Waller · Hendrik Sommerhoff · Michael Moeller · Rushil Anirudh · Shuang Qiu · Xiaohan Wei · Zhuoran Yang · Jayaraman Thiagarajan · Salman Asif · Michael Gillhofer · Johannes Brandstetter · Sepp Hochreiter · Felix Petersen · Dhruv Patel · Assad Oberai · Akshay Kamath · Sushrut Karmalkar · Eric Price · Ali Ahmed · Zahra Kadkhodaie · Sreyas Mohan · Eero Simoncelli · Carlos Fernandez-Granda · Oscar Leong · Wesam Sakla · Rebecca Willett · Stephan Hoyer · Jascha Sohl-Dickstein · Sam Greydanus · Gauri Jagatap · Chinmay Hegde · Michael Kellman · Jonathan Tamir · Nouamane Laanait · Ousmane Dia · Mirco Ravanelli · Jonathan Binas · Negar Rostamzadeh · Shirin Jalali · Tiantian Fang · Alex Schwing · Sébastien Lachapelle · Philippe Brouillard · Tristan Deleu · Simon Lacoste-Julien · Stella Yu · Arya Mazumdar · Ankit Singh Rawat · Yue Zhao · Jianshu Chen · Xiaoyang Li · Hubert Ramsauer · Gabrio Rizzuti · Nikolaos Mitsakos · Dingzhou Cao · Thomas Strohmer · Yang Li · Pei Peng · Gregory Ongie