Poster
Toward Efficient Robust Training against Union of $\ell_p$ Threat Models
Gaurang Sriramanan · Maharshi Gor · Soheil Feizi
The overwhelming vulnerability of deep neural networks to carefully crafted perturbations known as adversarial attacks has led to the development of various training techniques to produce robust models. While the primary focus of existing approaches has been directed toward addressing the worst-case performance achieved under a single threat model, it is imperative that safety-critical systems are robust with respect to multiple threat models simultaneously. Existing approaches that address worst-case performance under the union of such threat models ($\ell_{\infty}, \ell_2, \ell_1$) either utilize adversarial training methods that require multi-step attacks, which are computationally expensive in practice, or rely upon fine-tuning pre-trained models that are robust with respect to a single threat model. In this work, we show that by carefully choosing the objective function used for robust training, it is possible to achieve similar or improved worst-case performance over a union of threat models while utilizing only single-step attacks, thereby achieving a significant reduction in the computational resources necessary for training. Furthermore, prior work showed that adversarial training specific to the $\ell_1$ threat model is relatively difficult, to the extent that even multi-step adversarially trained models were shown to be prone to gradient masking. However, the proposed method, when applied to the $\ell_1$ threat model specifically, enables us to obtain the first $\ell_1$-robust model trained solely with single-step adversaries. Finally, to demonstrate the merits of our approach, we utilize a modern set of attack evaluations to better estimate the worst-case performance under the considered union of threat models.
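To make the single-step idea concrete, below is a minimal PyTorch sketch of adversarial training over a union of $\ell_p$ threat models. It is not the authors' objective function: the `EPS` radii, the per-batch threat sampling, and the `single_step_attack` / `train_step` helpers are all illustrative assumptions. It only shows why single-step training is cheap: each update needs exactly one extra gradient computation, whichever threat model is drawn.

```python
import torch
import torch.nn.functional as F

# Hypothetical perturbation radii for CIFAR-10-scale inputs in [0, 1];
# the paper's exact budgets and training objective may differ.
EPS = {"linf": 8 / 255, "l2": 0.5, "l1": 12.0}


def single_step_attack(model, x, y, threat):
    """Craft a one-step adversarial example under the given lp threat model.

    Illustrative steepest-descent directions (assumes x is a 4D image batch):
      linf: FGSM, i.e. the sign of the gradient scaled to the radius
      l2:   the gradient rescaled to lie on the l2 sphere of radius eps
      l1:   the full l1 budget placed on the largest-magnitude gradient
            coordinate of each image
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    eps = EPS[threat]
    if threat == "linf":
        delta = eps * grad.sign()
    elif threat == "l2":
        norm = grad.flatten(1).norm(p=2, dim=1).clamp_min(1e-12)
        delta = eps * grad / norm.view(-1, 1, 1, 1)
    else:  # l1
        flat = grad.flatten(1)
        rows = torch.arange(flat.size(0), device=flat.device)
        idx = flat.abs().argmax(dim=1)
        delta = torch.zeros_like(flat)
        delta[rows, idx] = eps * flat[rows, idx].sign()
        delta = delta.view_as(x)
    return (x + delta).clamp(0.0, 1.0).detach()


def train_step(model, optimizer, x, y):
    """One training update using a single-step adversary from a randomly
    sampled threat model, so each step costs one extra backward pass."""
    threat = ("linf", "l2", "l1")[torch.randint(3, (1,)).item()]
    x_adv = single_step_attack(model, x, y, threat)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sampling one threat model per batch, rather than taking the worst case over all three at every step, is one simple way to keep the per-iteration cost at a single attack step; the paper's actual loss is chosen more carefully, in particular to avoid the gradient masking that plagues naive single-step $\ell_1$ training.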
Author Information
Gaurang Sriramanan (University of Maryland, College Park)
Maharshi Gor (University of Maryland, College Park)

Second-year CS PhD student at the University of Maryland, College Park, and Student Researcher at Google. Interested in Natural Language Understanding and Efficient Learning Methods.
Soheil Feizi (University of Maryland)
More from the Same Authors
- 2023 Poster: Exploring Geometry of Blind Spots in Vision models
  Sriram Balasubramanian · Gaurang Sriramanan · Vinu Sankar Sadasivan · Soheil Feizi
- 2023 Poster: Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases
  Mazda Moayeri · Wenxiao Wang · Sahil Singla · Soheil Feizi
- 2023 Poster: Temporal Robustness against Data Poisoning
  Wenxiao Wang · Soheil Feizi
- 2023 Poster: Diffused Redundancy in Pre-trained Representations
  Vedant Nanda · Till Speicher · John Dickerson · Krishna Gummadi · Soheil Feizi · Adrian Weller
- 2022 Poster: Hard ImageNet: Segmentations for Objects with Strong Spurious Cues
  Mazda Moayeri · Sahil Singla · Soheil Feizi
- 2022 Poster: Explicit Tradeoffs between Adversarial and Natural Distributional Robustness
  Mazda Moayeri · Kiarash Banihashem · Soheil Feizi
- 2022 Poster: Lethal Dose Conjecture on Data Poisoning
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 Poster: Improved techniques for deterministic l2 robustness
  Sahil Singla · Soheil Feizi
- 2021 Poster: Towards Efficient and Effective Adversarial Training
  Gaurang Sriramanan · Sravanti Addepalli · Arya Baburaj · Venkatesh Babu R
- 2021 Poster: Improving Deep Learning Interpretability by Saliency Guided Training
  Aya Abdelsalam Ismail · Hector Corrada Bravo · Soheil Feizi
- 2020: Opening Remarks
  Reinhard Heckel · Paul Hand · Soheil Feizi · Lenka Zdeborová · Richard Baraniuk
- 2020 Workshop: Workshop on Deep Learning and Inverse Problems
  Reinhard Heckel · Paul Hand · Richard Baraniuk · Lenka Zdeborová · Soheil Feizi
- 2020 Poster: Certifying Confidence via Randomized Smoothing
  Aounon Kumar · Alexander Levine · Soheil Feizi · Tom Goldstein
- 2020 Poster: Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation
  Yogesh Balaji · Rama Chellappa · Soheil Feizi
- 2020 Poster: Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
  Gaurang Sriramanan · Sravanti Addepalli · Arya Baburaj · Venkatesh Babu R
- 2020 Spotlight: Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
  Gaurang Sriramanan · Sravanti Addepalli · Arya Baburaj · Venkatesh Babu R
- 2020 Poster: Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
  Wei-An Lin · Chun Pong Lau · Alexander Levine · Rama Chellappa · Soheil Feizi
- 2020 Poster: Benchmarking Deep Learning Interpretability in Time Series Predictions
  Aya Abdelsalam Ismail · Mohamed Gunady · Hector Corrada Bravo · Soheil Feizi
- 2020 Poster: (De)Randomized Smoothing for Certifiable Defense against Patch Attacks
  Alexander Levine · Soheil Feizi
- 2019: Soheil Feizi, "Certifiable Defenses against Adversarial Attacks"
  Soheil Feizi
- 2019 Poster: Functional Adversarial Attacks
  Cassidy Laidlaw · Soheil Feizi
- 2019 Poster: Quantum Wasserstein Generative Adversarial Networks
  Shouvanik Chakrabarti · Huang Yiming · Tongyang Li · Soheil Feizi · Xiaodi Wu
- 2019 Poster: Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
  Aya Abdelsalam Ismail · Mohamed Gunady · Luiz Pessoa · Hector Corrada Bravo · Soheil Feizi
- 2018 Poster: Porcupine Neural Networks: Approximating Neural Network Landscapes
  Soheil Feizi · Hamid Javadi · Jesse Zhang · David Tse
- 2017 Poster: Tensor Biclustering
  Soheil Feizi · Hamid Javadi · David Tse
- 2014 Poster: Biclustering Using Message Passing
  Luke O'Connor · Soheil Feizi