Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that 13 defenses recently published at ICLR, ICML and NeurIPS---and which illustrate a diverse set of defense strategies---can be circumvented despite attempting to perform evaluations using adaptive attacks.
While prior evaluation papers focused mainly on the end result---showing that a defense was ineffective---this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
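To make the idea of an adaptive attack concrete, the sketch below shows one common building block: PGD combined with BPDA (Backward Pass Differentiable Approximation), used against a hypothetical defense whose preprocessing step has useless gradients. This is not one of the paper's specific attacks; `model`, `defense_preprocess`, and the hyperparameters are illustrative assumptions.

```python
# Minimal PGD + BPDA sketch (illustrative, not the paper's attacks).
# `model` is any defended PyTorch classifier; `defense_preprocess` is a
# placeholder stand-in for a non-differentiable input-purification defense.
import torch
import torch.nn.functional as F


def defense_preprocess(x):
    # Placeholder "defense": 8-bit quantization. Its gradient is zero almost
    # everywhere, so naive gradient-based attacks fail without adaptation.
    return torch.round(x * 255.0) / 255.0


class BPDA(torch.autograd.Function):
    """Run the non-differentiable defense in the forward pass and
    approximate its gradient by the identity in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return defense_preprocess(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through (identity) gradient


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """L_inf PGD that attacks the full defended pipeline end to end."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(BPDA.apply(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

The key point is that the attack targets the defended pipeline end to end, and the gradient approximation is tailored to this particular (hypothetical) defense; a different defense would call for a different adaptation, mirroring the paper's message that such tuning cannot be automated.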
Author Information
Florian Tramer (Stanford University)
Nicholas Carlini (Google)
Wieland Brendel (University of Tübingen)
Aleksander Madry (MIT)
Aleksander Madry is the NBX Associate Professor of Computer Science in the MIT EECS Department and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, spent time at Microsoft Research New England and on the faculty of EPFL. Aleksander's research interests span algorithms, continuous optimization, the science of deep learning, and understanding machine learning from a robustness perspective. His work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and the 2018 Presburger Award.
More from the Same Authors
- 2021: Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing
  Xuechen (Chen) Li · Florian Tramer · Percy Liang · Tatsunori Hashimoto
- 2022: A Unified Framework for Comparing Learning Algorithms
  Harshay Shah · Sung Min Park · Andrew Ilyas · Aleksander Madry
- 2023 Poster: Students Parrot Their Teachers: Membership Inference on Model Distillation
  Matthew Jagielski · Milad Nasr · Katherine Lee · Christopher Choquette-Choo · Nicholas Carlini · Florian Tramer
- 2023 Poster: Effective Robustness against Natural Distribution Shifts for Models with Different Training Data
  Zhouxing Shi · Nicholas Carlini · Ananth Balashankar · Ludwig Schmidt · Cho-Jui Hsieh · Alex Beutel · Yao Qin
- 2023 Poster: Are aligned neural networks adversarially aligned?
  Nicholas Carlini · Florian Tramer · Daphne Ippolito · Ludwig Schmidt · Milad Nasr · Matthew Jagielski · Pang Wei Koh · Irena Gao · Christopher Choquette-Choo
- 2023 Poster: Counterfactual Memorization in Neural Language Models
  Chiyuan Zhang · Daphne Ippolito · Katherine Lee · Matthew Jagielski · Florian Tramer · Nicholas Carlini
- 2023 Oral: Students Parrot Their Teachers: Membership Inference on Model Distillation
  Matthew Jagielski · Milad Nasr · Katherine Lee · Christopher Choquette-Choo · Nicholas Carlini · Florian Tramer
- 2022 Workshop: Workshop on Machine Learning Safety
  Dan Hendrycks · Victoria Krakovna · Dawn Song · Jacob Steinhardt · Nicholas Carlini
- 2022: Invited Talk: Aleksander Mądry
  Aleksander Madry
- 2022 Poster: Handcrafted Backdoors in Deep Neural Networks
  Sanghyun Hong · Nicholas Carlini · Alexey Kurakin
- 2022 Poster: Increasing Confidence in Adversarial Robustness Evaluations
  Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini
- 2022 Poster: The Privacy Onion Effect: Memorization is Relative
  Nicholas Carlini · Matthew Jagielski · Chiyuan Zhang · Nicolas Papernot · Andreas Terzis · Florian Tramer
- 2022 Poster: Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
  Maura Pintor · Luca Demetrio · Angelo Sotgiu · Ambra Demontis · Nicholas Carlini · Battista Biggio · Fabio Roli
- 2022 Poster: 3DB: A Framework for Debugging Computer Vision Models
  Guillaume Leclerc · Hadi Salman · Andrew Ilyas · Sai Vemprala · Logan Engstrom · Vibhav Vineet · Kai Xiao · Pengchuan Zhang · Shibani Santurkar · Greg Yang · Ashish Kapoor · Aleksander Madry
- 2021: Discussion: Aleksander Mądry, Ernest Mwebaze, Suchi Saria
  Aleksander Madry · Ernest Mwebaze · Suchi Saria
- 2021: ML Model Debugging: A Data Perspective
  Aleksander Madry
- 2021 Poster: Antipodes of Label Differential Privacy: PATE and ALIBI
  Mani Malek Esmaeili · Ilya Mironov · Karthik Prasad · Igor Shilov · Florian Tramer
- 2021 Poster: Unadversarial Examples: Designing Objects for Robust Vision
  Hadi Salman · Andrew Ilyas · Logan Engstrom · Sai Vemprala · Aleksander Madry · Ashish Kapoor
- 2021 Poster: Editing a classifier by rewriting its prediction rules
  Shibani Santurkar · Dimitris Tsipras · Mahalaxmi Elango · David Bau · Antonio Torralba · Aleksander Madry
- 2020: What Do Our Models Learn?
  Aleksander Madry
- 2020 Poster: Measuring Robustness to Natural Distribution Shifts in Image Classification
  Rohan Taori · Achal Dave · Vaishaal Shankar · Nicholas Carlini · Benjamin Recht · Ludwig Schmidt
- 2020 Poster: Do Adversarially Robust ImageNet Models Transfer Better?
  Hadi Salman · Andrew Ilyas · Logan Engstrom · Ashish Kapoor · Aleksander Madry
- 2020 Spotlight: Measuring Robustness to Natural Distribution Shifts in Image Classification
  Rohan Taori · Achal Dave · Vaishaal Shankar · Nicholas Carlini · Benjamin Recht · Ludwig Schmidt
- 2020 Oral: Do Adversarially Robust ImageNet Models Transfer Better?
  Hadi Salman · Andrew Ilyas · Logan Engstrom · Ashish Kapoor · Aleksander Madry
- 2019 Workshop: Machine Learning with Guarantees
  Ben London · Gintare Karolina Dziugaite · Daniel Roy · Thorsten Joachims · Aleksander Madry · John Shawe-Taylor
- 2019 Poster: Adversarial Training and Robustness for Multiple Perturbations
  Florian Tramer · Dan Boneh
- 2019 Spotlight: Adversarial Training and Robustness for Multiple Perturbations
  Florian Tramer · Dan Boneh
- 2019 Poster: Image Synthesis with a Single (Robust) Classifier
  Shibani Santurkar · Andrew Ilyas · Dimitris Tsipras · Logan Engstrom · Brandon Tran · Aleksander Madry
- 2019 Poster: Adversarial Examples Are Not Bugs, They Are Features
  Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Logan Engstrom · Brandon Tran · Aleksander Madry
- 2019 Spotlight: Adversarial Examples Are Not Bugs, They Are Features
  Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Logan Engstrom · Brandon Tran · Aleksander Madry
- 2018: Contributed talk 6: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
  Florian Tramer
- 2018: Adversarial Vision Challenge: Shooting ML Models in the Dark: The Landscape of Blackbox Attacks
  Aleksander Madry
- 2018 Workshop: Workshop on Security in Machine Learning
  Nicolas Papernot · Jacob Steinhardt · Matt Fredrikson · Kamalika Chaudhuri · Florian Tramer
- 2018 Poster: Spectral Signatures in Backdoor Attacks
  Brandon Tran · Jerry Li · Aleksander Madry
- 2018 Poster: How Does Batch Normalization Help Optimization?
  Shibani Santurkar · Dimitris Tsipras · Andrew Ilyas · Aleksander Madry
- 2018 Poster: Adversarially Robust Generalization Requires More Data
  Ludwig Schmidt · Shibani Santurkar · Dimitris Tsipras · Kunal Talwar · Aleksander Madry
- 2018 Oral: How Does Batch Normalization Help Optimization?
  Shibani Santurkar · Dimitris Tsipras · Andrew Ilyas · Aleksander Madry
- 2018 Spotlight: Adversarially Robust Generalization Requires More Data
  Ludwig Schmidt · Shibani Santurkar · Dimitris Tsipras · Kunal Talwar · Aleksander Madry
- 2018 Tutorial: Adversarial Robustness: Theory and Practice
  J. Zico Kolter · Aleksander Madry