In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored robust training and network pruning \emph{independently}, each addressing one of these challenges, only a few recent works have studied them jointly. However, these works inherit a heuristic pruning strategy that was developed for benign training, which performs poorly when integrated with robust training techniques, including adversarial training and verifiable robust training. To overcome this challenge, we propose to make pruning techniques aware of the robust training objective and let that objective guide the search for which connections to prune. We realize this insight by formulating the pruning objective as an empirical risk minimization problem, which is solved efficiently using SGD. We demonstrate that our approach, titled HYDRA, achieves compressed networks with \textit{state-of-the-art} benign and robust accuracy, \textit{simultaneously}. We demonstrate the success of our approach across the CIFAR-10, SVHN, and ImageNet datasets with four robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. We also demonstrate the existence of highly robust sub-networks within non-robust networks.
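As a rough illustration of the pruning formulation described above, the sketch below shows the top-k masking step: each connection gets an importance score, and only the highest-scoring fraction of weights is kept. This is a minimal NumPy sketch, not the paper's implementation; in the actual method the scores themselves are trainable parameters optimized with SGD under the robust training loss, and `topk_mask` and the score initialization from |w| are illustrative choices.

```python
import numpy as np

def topk_mask(scores, sparsity):
    """Binary mask keeping the top (1 - sparsity) fraction of scores."""
    k = int(round((1.0 - sparsity) * scores.size))
    flat = scores.flatten()
    keep = np.argpartition(flat, -k)[-k:]   # indices of the k largest scores
    mask = np.zeros_like(flat)
    mask[keep] = 1.0
    return mask.reshape(scores.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))          # pretrained layer weights
s = np.abs(w)                        # scores initialized from weight magnitude
m = topk_mask(s, sparsity=0.75)      # prune 75% of connections
pruned = w * m                       # masked weights used in the forward pass
print(int(m.sum()))                  # 4 of 16 weights survive
```

In the learned-score view, the forward pass always uses `w * topk_mask(s, sparsity)`, and gradients of the robust loss flow back to `s` (e.g. via a straight-through estimator), so the robust objective, rather than a fixed magnitude heuristic, decides which connections survive.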
Author Information
Vikash Sehwag (Princeton University)
Shiqi Wang (Columbia University)
Prateek Mittal (Princeton University)
Suman Jana (Columbia University)
More from the Same Authors
- 2021: RobustBench: a standardized adversarial robustness benchmark (Francesco Croce · Maksym Andriushchenko · Vikash Sehwag · Edoardo Debenedetti · Nicolas Flammarion · Mung Chiang · Prateek Mittal · Matthias Hein)
- 2021: A Novel Self-Distillation Architecture to Defeat Membership Inference Attacks (Xinyu Tang · Saeed Mahloujifar · Liwei Song · Virat Shejwalkar · Amir Houmansadr · Prateek Mittal)
- 2022: Lower Bounds on 0-1 Loss for Multi-class Classification with a Test-time Attacker (Sihui Dai · Wenxin Ding · Arjun Nitin Bhagoji · Daniel Cullina · Prateek Mittal · Ben Zhao)
- 2022 Poster: Formulating Robustness Against Unforeseen Attacks (Sihui Dai · Saeed Mahloujifar · Prateek Mittal)
- 2022 Poster: Understanding Robust Learning through the Lens of Representation Similarities (Christian Cianfarani · Arjun Nitin Bhagoji · Vikash Sehwag · Ben Zhao · Heather Zheng · Prateek Mittal)
- 2022 Poster: General Cutting Planes for Bound-Propagation-Based Neural Network Verification (Huan Zhang · Shiqi Wang · Kaidi Xu · Linyi Li · Bo Li · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter)
- 2022 Poster: Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning (Jiachen T. Wang · Saeed Mahloujifar · Shouda Wang · Ruoxi Jia · Prateek Mittal)
- 2021 Poster: Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification (Shiqi Wang · Huan Zhang · Kaidi Xu · Xue Lin · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter)
- 2020 Poster: Ensuring Fairness Beyond the Training Data (Debmalya Mandal · Samuel Deng · Suman Jana · Jeannette Wing · Daniel Hsu)
- 2019 Poster: Lower Bounds on Adversarial Robustness from Optimal Transport (Arjun Nitin Bhagoji · Daniel Cullina · Prateek Mittal)
- 2018 Poster: PAC-learning in the presence of adversaries (Daniel Cullina · Arjun Nitin Bhagoji · Prateek Mittal)
- 2018 Poster: Efficient Formal Safety Analysis of Neural Networks (Shiqi Wang · Kexin Pei · Justin Whitehouse · Junfeng Yang · Suman Jana)