In the adversarial training framework of \cite{carmon2019unlabeled,gowal2021improving}, generated or real unlabeled data, equipped with pseudo-labels, are used to improve adversarial robustness. We provide statistical insights into why artificially generated data improve adversarial training. In particular, we study how the attack strength and the quality of the unlabeled data affect adversarial robustness in this framework. Our results show that with a high-quality unlabeled-data generator, adversarial training benefits greatly from this framework even under large attack strength, while a poor generator can still help to some extent. To adapt to the quality of the generated data, we propose an algorithm that performs an online adjustment of the weight between the labeled real data and the generated data, aiming to minimize the adversarial risk. Numerical studies verify our theory and demonstrate the effectiveness of the proposed algorithm.
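The weighted scheme described above can be sketched in a toy setting. This is an illustrative pure-Python example, not the paper's exact algorithm: a 1-D logistic model is adversarially trained (FGSM-style attacks) on a mixture of labeled real data and pseudo-labeled generated data, with a mixing weight `lam` adjusted online by a simple heuristic (trust the generated data more when its adversarial loss tracks the real data's). All function names, data, and the adjustment rule here are assumptions made for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adv_example(x, y, w, eps):
    """FGSM-style attack: perturb x in the direction that increases the loss."""
    grad_x = (sigmoid(w * x) - y) * w          # d(logistic loss)/dx
    return x + eps * (1.0 if grad_x > 0 else -1.0)

def adv_loss(data, w, eps):
    """Average logistic loss on adversarially perturbed inputs."""
    total = 0.0
    for x, y in data:
        p = sigmoid(w * adv_example(x, y, w, eps))
        total += -(y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12))
    return total / len(data)

def train(real, generated, eps=0.1, lr=0.5, steps=200):
    """Minimize (1-lam)*L_adv(real) + lam*L_adv(generated), adjusting lam online."""
    w, lam = 0.0, 0.5
    for _ in range(steps):
        # finite-difference gradient of the weighted adversarial loss w.r.t. w
        h = 1e-4
        obj = lambda wv: ((1 - lam) * adv_loss(real, wv, eps)
                          + lam * adv_loss(generated, wv, eps))
        w -= lr * (obj(w + h) - obj(w - h)) / (2 * h)
        # illustrative heuristic (not the paper's rule): weight the generated
        # data by how closely its adversarial loss matches the real data's
        gap = abs(adv_loss(generated, w, eps) - adv_loss(real, w, eps))
        lam = max(0.1, min(0.9, 0.9 - gap))
    return w, lam

# toy data: (x, label) pairs; generated data carries pseudo-labels
real = [(1.0, 1), (-1.0, 0), (2.0, 1), (-2.0, 0)]
generated = [(1.5, 1), (-1.5, 0), (0.5, 1), (-0.5, 0)]
w, lam = train(real, generated)
```

With a consistent pseudo-label generator as above, the learned `w` separates the classes and the adversarial risk on the real data drops well below the chance-level loss of `log 2`.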
Author Information
Yue Xing (Purdue University)
Qifan Song (Purdue University)
Guang Cheng (University of California, Los Angeles)