Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal, as shown by the recent Tesla autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within a certain L-norm distance of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and the ones that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10x larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks.
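To make the bound-estimation idea concrete, below is a minimal sketch of naive interval bound propagation through a small fully-connected ReLU network. This is plain interval arithmetic, which is sound but looser than the symbolic analysis the paper proposes; the function name `interval_bounds`, the toy network, and the example numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def interval_bounds(weights, biases, lower, upper):
    """Propagate an input box [lower, upper] through a fully-connected
    ReLU network and return element-wise bounds on its outputs.

    weights/biases: lists of per-layer parameters (each W is out_dim x in_dim).
    Plain interval arithmetic only; tighter symbolic methods track how
    outputs depend on inputs and give less conservative bounds.
    """
    l, u = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Affine layer: positive weights take the matching interval endpoint,
        # negative weights take the opposite one.
        l, u = W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)
    return l, u

# Toy usage: a 2-2-1 network and an L-infinity ball of radius 0.1 around x0.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
x0, eps = np.array([0.3, 0.7]), 0.1
lo, hi = interval_bounds([W1, W2], [b1, b2], x0 - eps, x0 + eps)
print(lo, hi)
```

If a safety property requires, say, that an output stay below a threshold for every input in the box, the property is certified whenever the computed upper bound is below that threshold; when the bounds are too loose to decide, the analysis would have to tighten them, for example by splitting the input range.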
Author Information
Shiqi Wang (Columbia University)
Kexin Pei (Columbia University)
I am a fifth-year Ph.D. student in the Department of Computer Science at Columbia University, advised by Suman Jana and Junfeng Yang. Before coming to Columbia, I obtained a research-based master's degree from the Department of Computer Science at Purdue University, advised by Dongyan Xu, Xiangyu Zhang, and Luo Si. Prior to Purdue, I worked in the Database group at HKBU, advised by Haibo Hu and Jianliang Xu. I am broadly interested in security, systems, and machine learning. I am currently deeply excited about developing neural frameworks and architectures to understand program semantics and using them for program analysis and security.
Justin Whitehouse (Columbia University)
Junfeng Yang (Columbia University)
Suman Jana (Columbia University)
More from the Same Authors
- 2022 Poster: General Cutting Planes for Bound-Propagation-Based Neural Network Verification »
  Huan Zhang · Shiqi Wang · Kaidi Xu · Linyi Li · Bo Li · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter
- 2021 Poster: Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification »
  Shiqi Wang · Huan Zhang · Kaidi Xu · Xue Lin · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter
- 2020 Poster: Ensuring Fairness Beyond the Training Data »
  Debmalya Mandal · Samuel Deng · Suman Jana · Jeannette Wing · Daniel Hsu
- 2020 Poster: HYDRA: Pruning Adversarially Robust Neural Networks »
  Vikash Sehwag · Shiqi Wang · Prateek Mittal · Suman Jana
- 2019 Poster: Metric Learning for Adversarial Robustness »
  Chengzhi Mao · Ziyuan Zhong · Junfeng Yang · Carl Vondrick · Baishakhi Ray
- 2017: Poster Spotlights I »
  Taesik Na · Yang Song · Aman Sinha · Richard Shin · Qiuyuan Huang · Nina Narodytska · Matt Staib · Kexin Pei · Fnu Suya · Amirata Ghorbani · Jacob Buckman · Matthias Hein · Huan Zhang · Yanjun Qi · Yuan Tian · Min Du · Dimitris Tsipras