Exchanging gradients is a widely used scheme in modern multi-node learning systems (e.g., distributed training, collaborative learning). For a long time, gradients were believed to be safe to share: i.e., the training data would not be leaked by gradient exchange. However, in this paper, we show that it is possible to obtain the private training data from the publicly shared gradients. The leakage takes only a few gradient steps to perform and recovers the original training data rather than look-alike alternatives. We name this leakage \textit{deep leakage from gradients} and practically validate the effectiveness of our algorithm on both computer vision and natural language processing tasks. We empirically show that our attack is much stronger than previous approaches, and we thereby aim to raise awareness of the need to rethink the safety of gradient sharing. We also discuss several possible strategies to defend against this deep leakage.
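The core idea of the attack — optimizing randomly initialized dummy data and labels so that their gradients match the publicly shared ones — can be illustrated on a toy linear model. The sketch below is a hypothetical NumPy illustration, not the paper's actual implementation (which targets deep networks and uses framework autograd): it assumes a single-sample squared loss, for which the gradient-matching derivatives can be written analytically.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Model weights, known to both parties (e.g., a shared model snapshot).
w = rng.normal(size=d)

# Victim's private sample and label (made-up data for illustration).
x_true = rng.normal(size=d)
y_true = 1.0

# Gradient of the loss 0.5 * (w @ x - y)**2 w.r.t. w -- this is what is shared.
g_shared = (w @ x_true - y_true) * x_true

# Attacker: start from random dummy data and descend on the gradient mismatch
# D = 0.5 * ||grad(dummy) - g_shared||^2, i.e., "a few gradient steps".
x_hat = rng.normal(size=d)
y_hat = 0.0
lr = 0.01
for _ in range(50_000):
    e = w @ x_hat - y_hat            # dummy prediction error
    r = e * x_hat - g_shared         # gradient mismatch residual
    # Analytic derivatives of D w.r.t. the dummy data:
    grad_x = (r @ x_hat) * w + e * r
    grad_y = -(r @ x_hat)
    x_hat -= lr * grad_x
    y_hat -= lr * grad_y

mismatch = np.linalg.norm((w @ x_hat - y_hat) * x_hat - g_shared)
print(f"final gradient mismatch: {mismatch:.2e}")
```

Once the dummy gradient matches the shared one, the dummy data is (up to the model's inherent ambiguity) a reconstruction of the private sample; for deep networks the same loop is run with framework autograd instead of hand-derived gradients.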
Author Information
Ligeng Zhu (MIT)
Zhijian Liu (MIT)
Song Han (MIT)
More from the Same Authors
-
2022 : SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models »
Song Han -
2022 Poster: Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models »
Muyang Li · Ji Lin · Chenlin Meng · Stefano Ermon · Song Han · Jun-Yan Zhu -
2022 Poster: On-Device Training Under 256KB Memory »
Ji Lin · Ligeng Zhu · Wei-Ming Chen · Wei-Chen Wang · Chuang Gan · Song Han -
2021 Poster: Memory-efficient Patch-based Inference for Tiny Deep Learning »
Ji Lin · Wei-Ming Chen · Han Cai · Chuang Gan · Song Han -
2021 Poster: Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning »
Ligeng Zhu · Hongzhou Lin · Yao Lu · Yujun Lin · Song Han -
2020 Poster: MCUNet: Tiny Deep Learning on IoT Devices »
Ji Lin · Wei-Ming Chen · Yujun Lin · John Cohn · Chuang Gan · Song Han -
2020 Spotlight: MCUNet: Tiny Deep Learning on IoT Devices »
Ji Lin · Wei-Ming Chen · Yujun Lin · John Cohn · Chuang Gan · Song Han -
2020 Poster: Differentiable Augmentation for Data-Efficient GAN Training »
Shengyu Zhao · Zhijian Liu · Ji Lin · Jun-Yan Zhu · Song Han -
2020 Poster: TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning »
Han Cai · Chuang Gan · Ligeng Zhu · Song Han -
2019 : Hardware-aware Neural Architecture Design for Small and Fast Models: from 2D to 3D »
Song Han -
2019 : Posters and Coffee »
Sameer Kumar · Tomasz Kornuta · Oleg Bakhteev · Hui Guan · Xiaomeng Dong · Minsik Cho · Sören Laue · Theodoros Vasiloudis · Andreea Anghel · Erik Wijmans · Zeyuan Shang · Oleksii Kuchaiev · Ji Lin · Susan Zhang · Ligeng Zhu · Beidi Chen · Vinu Joseph · Jialin Ding · Jonathan Raiman · Ahnjae Shin · Vithursan Thangarasa · Anush Sankaran · Akhil Mathur · Martino Dazzi · Markus Löning · Darryl Ho · Emanuel Zgraggen · Supun Nakandala · Tomasz Kornuta · Rita Kuznetsova -
2019 Poster: Park: An Open Platform for Learning-Augmented Computer Systems »
Hongzi Mao · Parimarjan Negi · Akshay Narayan · Hanrui Wang · Jiacheng Yang · Haonan Wang · Ryan Marcus · Ravichandra Addanki · Mehrdad Khani Shirkoohi · Songtao He · Vikram Nathan · Frank Cangialosi · Shaileshh Venkatakrishnan · Wei-Hung Weng · Song Han · Tim Kraska · Mohammad Alizadeh -
2019 Poster: Point-Voxel CNN for Efficient 3D Deep Learning »
Zhijian Liu · Haotian Tang · Yujun Lin · Song Han -
2019 Spotlight: Point-Voxel CNN for Efficient 3D Deep Learning »
Zhijian Liu · Haotian Tang · Yujun Lin · Song Han -
2018 : Panel discussion »
Max Welling · Tim Genewein · Edwin Park · Song Han -
2018 : Prof. Song Han »
Song Han -
2018 : Bandwidth efficient deep learning by model compression »
Song Han -
2018 Poster: Learning to Exploit Stability for 3D Scene Parsing »
Yilun Du · Zhijian Liu · Hector Basevi · Ales Leonardis · Bill Freeman · Josh Tenenbaum · Jiajun Wu