Spotlight
Adversarial Training and Robustness for Multiple Perturbations
Florian Tramer · Dan Boneh
Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these defenses offer no guarantees and, at times, even increase the model's vulnerability.
Our aim is to understand the reasons underlying this robustness trade-off, and to train models that are simultaneously robust to multiple perturbation types.
We prove that a trade-off in robustness to different types of $\ell_p$-bounded and spatial perturbations must exist in a natural and simple statistical setting.
We corroborate our formal analysis by demonstrating similar robustness trade-offs on MNIST and CIFAR10. We propose new multi-perturbation adversarial training schemes, as well as an efficient attack for the $\ell_1$-norm, and use these to show that models trained against multiple attacks fail to achieve robustness competitive with that of models trained on each attack individually.
In particular, we find that adversarial training with first-order $\ell_\infty, \ell_1$ and $\ell_2$ attacks on MNIST achieves merely $50\%$ robust accuracy, in part because of gradient masking.
Finally, we propose affine attacks that linearly interpolate between perturbation types and further degrade the accuracy of adversarially trained models.
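The multi-perturbation adversarial training schemes mentioned in the abstract train a single model against several attack types at once, for example by taking the worst-case or the average adversarial loss across attack types for each batch. The sketch below is a minimal illustration of that idea and not the authors' implementation: it assumes a PyTorch image classifier, uses simple placeholder PGD attacks for the $\ell_\infty$ and $\ell_2$ norms, and its function names and hyperparameters (`pgd_linf`, `pgd_l2`, `multi_perturbation_step`, the epsilons and step sizes) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code) of multi-perturbation adversarial
# training, assuming a PyTorch image classifier and simple PGD attacks for the
# l_inf and l_2 norms. Epsilons, step sizes, and step counts are placeholders.
import torch
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """PGD attack constrained to an l_inf ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # ascent step under l_inf geometry
            delta.clamp_(-eps, eps)                   # project back onto the l_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep perturbed pixels in [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()


def pgd_l2(model, x, y, eps=2.0, alpha=0.1, steps=40):
    """PGD attack constrained to an l_2 ball of radius eps (inputs of shape N x C x H x W)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += alpha * g / g_norm               # normalized gradient step
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta *= (eps / d_norm).clamp(max=1.0)    # project back onto the l_2 ball
            delta.copy_((x + delta).clamp(0, 1) - x)
        delta.grad.zero_()
    return (x + delta).detach()


def multi_perturbation_step(model, optimizer, x, y, strategy="max"):
    """One training step against several perturbation types at once.

    strategy="max": train on the single worst attack for this batch.
    strategy="avg": average the adversarial losses over all attack types.
    """
    attacks = [pgd_linf, pgd_l2]  # an l_1 or spatial attack could be appended here
    losses = torch.stack(
        [F.cross_entropy(model(atk(model, x, y)), y) for atk in attacks]
    )
    loss = losses.max() if strategy == "max" else losses.mean()
    optimizer.zero_grad()  # clear gradients accumulated while crafting the attacks
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, an $\ell_1$ or spatial attack would be added to the `attacks` list in the same way, and the affine attacks described in the abstract would instead draw perturbations from linear interpolations of these perturbation sets rather than from any single one.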
Author Information
Florian Tramer (Stanford University)
Dan Boneh (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Adversarial Training and Robustness for Multiple Perturbations
  Thu. Dec 12th, 06:45 -- 08:45 PM, Room: East Exhibition Hall B + C #87
More from the Same Authors
- 2021: Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing
  Xuechen (Chen) Li · Florian Tramer · Percy Liang · Tatsunori Hashimoto
- 2023 Poster: Students Parrot Their Teachers: Membership Inference on Model Distillation
  Matthew Jagielski · Milad Nasr · Katherine Lee · Christopher Choquette-Choo · Nicholas Carlini · Florian Tramer
- 2023 Poster: Are aligned neural networks adversarially aligned?
  Nicholas Carlini · Florian Tramer · Daphne Ippolito · Ludwig Schmidt · Milad Nasr · Matthew Jagielski · Pang Wei Koh · Irena Gao · Christopher Choquette-Choo
- 2023 Poster: Counterfactual Memorization in Neural Language Models
  Chiyuan Zhang · Daphne Ippolito · Katherine Lee · Matthew Jagielski · Florian Tramer · Nicholas Carlini
- 2023 Oral: Students Parrot Their Teachers: Membership Inference on Model Distillation
  Matthew Jagielski · Milad Nasr · Katherine Lee · Christopher Choquette-Choo · Nicholas Carlini · Florian Tramer
- 2022 Poster: Increasing Confidence in Adversarial Robustness Evaluations
  Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini
- 2022 Poster: The Privacy Onion Effect: Memorization is Relative
  Nicholas Carlini · Matthew Jagielski · Chiyuan Zhang · Nicolas Papernot · Andreas Terzis · Florian Tramer
- 2021 Poster: Antipodes of Label Differential Privacy: PATE and ALIBI
  Mani Malek Esmaeili · Ilya Mironov · Karthik Prasad · Igor Shilov · Florian Tramer
- 2020: Invited Talk #4: Dan Boneh (Stanford University)
  Dan Boneh
- 2020 Poster: On Adaptive Attacks to Adversarial Example Defenses
  Florian Tramer · Nicholas Carlini · Wieland Brendel · Aleksander Madry
- 2018: Contributed talk 6: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
  Florian Tramer
- 2018 Workshop: Workshop on Security in Machine Learning
  Nicolas Papernot · Jacob Steinhardt · Matt Fredrikson · Kamalika Chaudhuri · Florian Tramer