On the Feasibility of Compressing Certifiably Robust Neural Networks
Pratik Vaishnavi · Veena Krish · Farhan Ahmed · Kevin Eykholt · Amir Rahmati
Event URL: https://openreview.net/forum?id=YzPaQcK2Ko4
Knowledge distillation is a popular approach for compressing high-performance neural networks for use in resource-constrained environments. However, the threat of adversarial machine learning raises the question: Is it possible to compress adversarially robust networks while achieving adversarial robustness similar to or better than that of the original network? In this paper, we explore this question with respect to certifiable robustness defenses, in which the defense establishes a formal robustness guarantee irrespective of the adversarial attack methodology. We present preliminary findings answering two main questions: 1) Is traditional knowledge distillation sufficient to compress certifiably robust neural networks? and 2) What aspects of the transfer process can we modify to improve compression effectiveness? Our work represents the first study of the interaction between machine learning model compression and certifiable robustness.
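For context, the knowledge distillation the abstract refers to can be sketched with the standard objective of Hinton et al. (2015): a temperature-softened KL term against the teacher's outputs mixed with cross-entropy on the true labels. This is a generic formulation, not the authors' exact training recipe; the names and the hyperparameters `T` and `alpha` below are illustrative.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Generic knowledge-distillation loss (Hinton et al., 2015) sketch.

    Combines a KL-divergence term matching the teacher's temperature-softened
    distribution with a cross-entropy term on the ground-truth labels.
    `T` and `alpha` are illustrative hyperparameters, not values from the paper.
    """
    # Soft targets: KL divergence between temperature-scaled distributions.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: usual cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

The paper's question is whether minimizing an objective of this form on a smaller student is enough to carry over not just accuracy but a certified robustness guarantee, and, if not, which parts of this transfer process need to change.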
Author Information
Pratik Vaishnavi (Stony Brook University)
Veena Krish (Stony Brook University)
Farhan Ahmed (International Business Machines)
Kevin Eykholt (International Business Machines)
Amir Rahmati (Stony Brook University)
More from the Same Authors
- 2022: Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model
  Nathalie Baracaldo · Kevin Eykholt · Farhan Ahmed · Yi Zhou · Shriti Priya · Taesung Lee · Swanand Kadhe · Yusong Tan · Sridevi Polavaram · Sterling Suggs
- 2022 Spotlight: Lightning Talks 5B-2
  Conglong Li · Mohammad Azizmalayeri · Mojan Javaheripi · Pratik Vaishnavi · Jon Hasselgren · Hao Lu · Kevin Eykholt · Arshia Soltani Moakhar · Wenze Liu · Gustavo de Rosa · Nikolai Hofmann · Minjia Zhang · Zixuan Ye · Jacob Munkberg · Amir Rahmati · Arman Zarei · Subhabrata Mukherjee · Yuxiong He · Shital Shah · Reihaneh Zohrabi · Hongtao Fu · Tomasz Religa · Yuliang Liu · Mohammad Manzuri · Mohammad Hossein Rohban · Zhiguo Cao · Caio Cesar Teodoro Mendes · Sebastien Bubeck · Farinaz Koushanfar · Debadeepta Dey
- 2022 Spotlight: Accelerating Certified Robustness Training via Knowledge Transfer
  Pratik Vaishnavi · Kevin Eykholt · Amir Rahmati
- 2022 Poster: Accelerating Certified Robustness Training via Knowledge Transfer
  Pratik Vaishnavi · Kevin Eykholt · Amir Rahmati