We show that label noise exists in adversarial training. This label noise arises from a mismatch between the true label distribution of adversarial examples and the labels inherited from the corresponding clean examples: the adversarial perturbation distorts the true label distribution, yet the common practice of inheriting clean labels ignores this distortion. Recognizing this label noise sheds light on the prevalence of robust overfitting in adversarial training and explains its intriguing dependence on perturbation radius and data quality. Our label noise perspective also aligns well with our observations of epoch-wise double descent in adversarial training. Guided by these analyses, we propose a method that automatically calibrates labels to address both label noise and robust overfitting. Our method achieves consistent performance improvements across various models and datasets without introducing new hyper-parameters or requiring additional tuning.
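To make the idea of label calibration concrete, below is a minimal PyTorch sketch of one plausible scheme: the one-hot label inherited from the clean example is interpolated with the model's own prediction on the adversarial example, softening labels whose true distribution the perturbation may have distorted. This is an illustrative sketch, not the authors' exact method; the interpolation weight `alpha`, the helper names, and the PGD settings are hypothetical choices.

```python
# Sketch: adversarial training with calibrated soft labels.
# Hypothetical illustration of label calibration, not the paper's exact method.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Standard PGD attack within an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def calibrated_loss(model, x, y, num_classes, alpha=0.7):
    """Cross-entropy against a label blended between the inherited one-hot
    label and the model's own prediction on the adversarial input."""
    x_adv = pgd_attack(model, x, y)
    logits = model(x_adv)
    one_hot = F.one_hot(y, num_classes).float()
    with torch.no_grad():
        pred = F.softmax(logits, dim=1)
    # Calibrated soft label: hedge the inherited label with the prediction.
    target = alpha * one_hot + (1 - alpha) * pred
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

The interpolation acts like a noise-aware smoothing of the inherited label: the closer `alpha` is to 1, the more the training trusts the clean label despite the perturbation.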
Chengyu Dong (University of California, San Diego)
Liyuan Liu (University of Illinois Urbana-Champaign)
Jingbo Shang (University of California, San Diego)