Poster
Deep Leakage from Gradients
Ligeng Zhu · Zhijian Liu · Song Han

Thu Dec 05:00 PM -- 07:00 PM PST @ East Exhibition Hall B + C #154

Exchanging gradients is a widely used scheme in modern multi-node learning systems (e.g., distributed training, collaborative learning). For a long time, people believed that gradients were safe to share, i.e., that the training set could not be leaked through gradient sharing. However, in this paper, we show that the private training set can be obtained from the publicly shared gradients. The leakage takes only a few gradient steps to process and recovers the original training set rather than look-alike alternatives. We name this leakage deep leakage from gradients and practically validate the effectiveness of our algorithm on both computer vision and natural language processing tasks. We empirically show that our attack is much stronger than previous approaches, and we thereby raise awareness that the safety of gradients needs to be rethought. We also discuss some possible strategies to defend against this deep leakage.
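The attack described above can be sketched in a few lines of PyTorch: the attacker initializes dummy inputs and labels, computes the gradients they would produce, and optimizes the dummies so those gradients match the observed ones. This is a minimal illustrative sketch, not the authors' released implementation; the toy linear model, data sizes, and step counts are assumptions chosen for brevity.

```python
# Minimal sketch of a gradient-inversion ("deep leakage") attack.
# The toy model, data, and hyperparameters below are illustrative
# assumptions, not the paper's exact experimental setup.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Victim: a small linear classifier and one private training example.
model = torch.nn.Linear(8, 4)
x_true = torch.randn(1, 8)
y_true = torch.tensor([2])

# The "shared" gradients that the attacker observes.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: dummy input and soft dummy label, both optimized so that the
# gradients they induce match the observed gradients.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross-entropy of the dummy data against the (softmaxed) dummy label.
    dummy_loss = torch.sum(
        -F.log_softmax(model(x_dummy), dim=-1) * F.softmax(y_dummy, dim=-1)
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # L2 distance between dummy gradients and the observed gradients.
    grad_diff = sum(
        ((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads)
    )
    grad_diff.backward()
    return grad_diff

init_diff = closure().item()
for _ in range(50):
    opt.step(closure)
final_diff = closure().item()
# After optimization, x_dummy approximates the private example x_true.
```

Matching the gradients of a single example against a small model is enough for the optimizer to drive the dummy input toward the private one; no generative prior or look-alike data is needed, which is what distinguishes this leakage from earlier membership- or property-inference attacks.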

Author Information

Ligeng Zhu (MIT)
Zhijian Liu (MIT)
Song Han (MIT)
