Abstract
Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of robustness by causing gradient-based attacks to fail, and they have subsequently been broken under more rigorous evaluations. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations systematically. In this work, we overcome these limitations by: (i) categorizing attack failures based on how they affect the optimization of gradient-based attacks, while also unveiling two novel failures affecting many popular attack implementations and past evaluations; (ii) proposing six novel indicators of failure to automatically detect the presence of such failures in the attack optimization process; and (iii) suggesting a systematic protocol to apply the corresponding fixes. Our extensive experimental analysis, involving more than 15 models in 3 distinct application domains, shows that our indicators of failure can be used to debug and improve current adversarial robustness evaluations, thereby providing a first concrete step towards automating and systematizing them. Our open-source code is available at: https://github.com/pralab/IndicatorsOfAttackFailure.
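To make the idea of an "indicator of failure" concrete, the following is a minimal sketch of one such check: monitoring for near-zero gradients during a standard L-inf PGD attack, a common symptom of gradient masking. This is an illustrative example only, not the paper's actual implementation; the function name, the 1e-12 threshold, and the PyTorch setup are assumptions made for this sketch.

```python
import torch

def pgd_with_zero_gradient_check(model, x, y, eps=8/255, alpha=2/255, steps=50):
    """Run L-inf PGD while flagging near-zero gradients (a symptom of
    gradient masking). Illustrative sketch; not the paper's code."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    zero_grad_steps = 0
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Indicator: a gradient that is numerically zero gives the attack
        # no direction to follow, so this optimization step is wasted.
        if grad.abs().max() < 1e-12:
            zero_grad_steps += 1
        # Standard PGD step: signed-gradient ascent, then project back
        # into the eps-ball around x and the valid input range [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    # A high fraction of wasted steps suggests the defense is masking
    # gradients and the evaluation needs an adaptive attack instead.
    return x_adv, zero_grad_steps / steps
```

In this sketch, a large returned fraction would signal that the attack's apparent failure reflects a broken optimization rather than genuine robustness, which is the kind of diagnosis the paper's indicators automate.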
Author Information
Maura Pintor (University of Cagliari)
Luca Demetrio (University of Genoa)
Angelo Sotgiu (University of Cagliari)
Ambra Demontis (University of Cagliari)
Nicholas Carlini (Google)
Battista Biggio (University of Cagliari)
Fabio Roli (University of Cagliari)
More from the Same Authors
- 2021: Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes
  Utku Ozbulak · Maura Pintor · Arnout Van Messem · Wesley De Neve
- 2022 Workshop: Workshop on Machine Learning Safety
  Dan Hendrycks · Victoria Krakovna · Dawn Song · Jacob Steinhardt · Nicholas Carlini
- 2022 Poster: Handcrafted Backdoors in Deep Neural Networks
  Sanghyun Hong · Nicholas Carlini · Alexey Kurakin
- 2022 Poster: Increasing Confidence in Adversarial Robustness Evaluations
  Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini
- 2022 Poster: The Privacy Onion Effect: Memorization is Relative
  Nicholas Carlini · Matthew Jagielski · Chiyuan Zhang · Nicolas Papernot · Andreas Terzis · Florian Tramer
- 2021 Poster: Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints
  Maura Pintor · Fabio Roli · Wieland Brendel · Battista Biggio
- 2020 Poster: On Adaptive Attacks to Adversarial Example Defenses
  Florian Tramer · Nicholas Carlini · Wieland Brendel · Aleksander Madry
- 2020 Poster: Measuring Robustness to Natural Distribution Shifts in Image Classification
  Rohan Taori · Achal Dave · Vaishaal Shankar · Nicholas Carlini · Benjamin Recht · Ludwig Schmidt
- 2020 Spotlight: Measuring Robustness to Natural Distribution Shifts in Image Classification
  Rohan Taori · Achal Dave · Vaishaal Shankar · Nicholas Carlini · Benjamin Recht · Ludwig Schmidt