Poster
Efficient Formal Safety Analysis of Neural Networks
Shiqi Wang · Kexin Pei · Justin Whitehouse · Junfeng Yang · Suman Jana

Tue Dec 04 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #157

Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. The consequences of such errors can be disastrous and even potentially fatal, as shown by the recent Tesla Autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L∞ norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network violates the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and the ones that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples when a property is violated. In this paper, we present a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10x larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust neural networks.
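To make the bound-estimation idea concrete, the short Python sketch below propagates an input interval through a toy ReLU network using naive interval arithmetic. This is only an illustrative baseline, not the paper's actual algorithm (which computes much tighter bounds); the 2-2-1 network, its weights, and the interval_bounds helper are hypothetical, chosen purely for the example.

    import numpy as np

    def interval_bounds(weights, biases, lo, hi):
        # Propagate the input box [lo, hi] through fully connected
        # ReLU layers, returning sound (but possibly loose) output bounds.
        for i, (W, b) in enumerate(zip(weights, biases)):
            # Split W by sign so each bound uses the correct end of each
            # input interval: for the lower bound, positive weights take
            # lo and negative weights take hi (and vice versa above).
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            new_lo = W_pos @ lo + W_neg @ hi + b
            new_hi = W_pos @ hi + W_neg @ lo + b
            lo, hi = new_lo, new_hi
            if i < len(weights) - 1:  # ReLU on hidden layers only
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        return lo, hi

    # Hypothetical 2-2-1 network, used purely for illustration.
    weights = [np.array([[1.0, -1.0], [0.5, 2.0]]),
               np.array([[1.0, -1.0]])]
    biases = [np.zeros(2), np.zeros(1)]
    lo, hi = interval_bounds(weights, biases,
                             lo=np.array([-0.1, -0.1]),
                             hi=np.array([0.1, 0.1]))
    # A property such as "output < c" holds for every input in the box
    # whenever hi < c; otherwise the analysis must refine the bounds or
    # search for a concrete counterexample.
    print(lo, hi)

If the computed output bounds already satisfy the property over the whole input range, the property is verified; the gap between such naive bounds and the true output range is precisely what tighter analyses like the one in this paper aim to close.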

Author Information

Shiqi Wang (Columbia University)
Kexin Pei (Columbia University)

I am a fifth-year Ph.D. student in the Department of Computer Science at Columbia University, advised by Suman Jana and Junfeng Yang. Before coming to Columbia, I obtained a research-based master's degree from the Department of Computer Science at Purdue University, advised by Dongyan Xu, Xiangyu Zhang, and Luo Si. Prior to Purdue, I worked in the Database Group at HKBU, advised by Haibo Hu and Jianliang Xu. I am broadly interested in security, systems, and machine learning. I am currently excited about developing neural frameworks and architectures to understand program semantics and using them for program analysis and security.

Justin Whitehouse (Columbia University)
Junfeng Yang (Columbia University)
Suman Jana (Columbia University)
