Poster
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao · Lei Feng · Jinfeng Yi · Sheng-Jun Huang · Songcan Chen
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples. By formalizing this malicious attack as finding the worst-case training data within a specific $\infty$-Wasserstein ball, we show that minimizing adversarial risk on the perturbed data is equivalent to optimizing an upper bound of natural risk on the original data. This implies that adversarial training can serve as a principled defense against delusive attacks, and thus the test accuracy degraded by delusive attacks can be largely recovered by adversarial training. To further understand the internal mechanism of the defense, we reveal that adversarial training resists delusive perturbations by preventing the learner from overly relying on non-robust features in a natural setting. Finally, we complement our theoretical findings with a set of experiments on popular benchmark datasets, showing that the defense withstands six different practical attacks. Both theoretical and empirical results vote for adversarial training when confronted with delusive adversaries.
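The defense advocated here is standard adversarial training applied to the (possibly poisoned) training set: an inner maximization finds a worst-case perturbation within a small $\ell_\infty$ budget, and the outer minimization trains on those perturbed inputs. As an illustration only, below is a minimal PGD-style sketch of one such training step; the model, random data, and hyperparameters (eps, alpha, steps) are hypothetical stand-ins, not the paper's experimental setup.

```python
# Minimal sketch of l_inf adversarial training (PGD inner maximization).
# All components here are illustrative, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Approximate the worst-case perturbation within the l_inf ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start inside the ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project back into the ball
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One outer step: minimize the loss on worst-case (adversarially perturbed) inputs."""
    model.eval()
    x_adv = pgd_attack(model, x, y, eps=eps)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # Random stand-in data; in practice this would be the possibly poisoned training set.
    x, y = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
    print(adversarial_training_step(model, optimizer, x, y))
```

The inner loop approximates the worst-case perturbation within an $\ell_\infty$ ball, which mirrors (at the per-example level) the worst-case training data that the abstract formalizes via an $\infty$-Wasserstein ball over the training distribution.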
Author Information
Lue Tao (Nanjing University of Aeronautics and Astronautics)
Lei Feng (Nanyang Technological University)
Jinfeng Yi (JD AI Research)
Sheng-Jun Huang (Nanjing University of Aeronautics and Astronautics)
Songcan Chen (Nanjing University of Aeronautics and Astronautics)
More from the Same Authors
- 2021 Poster: Multi-Label Learning with Pairwise Relevance Ordering
  Ming-Kun Xie · Sheng-Jun Huang
- 2021 Poster: Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence
  Deng-Bao Wang · Lei Feng · Min-Ling Zhang
- 2021 Poster: Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
  Hongxin Wei · Lue Tao · RENCHUNZI XIE · Bo An
- 2021 Poster: Fast Certified Robust Training with Short Warmup
  Zhouxing Shi · Yihan Wang · Huan Zhang · Jinfeng Yi · Cho-Jui Hsieh
- 2020 Poster: Provably Consistent Partial-Label Learning
  Lei Feng · Jiaqi Lv · Bo Han · Miao Xu · Gang Niu · Xin Geng · Bo An · Masashi Sugiyama
- 2018 Poster: Adaptive Negative Curvature Descent with Applications in Non-convex Optimization
  Mingrui Liu · Zhe Li · Xiaoyu Wang · Jinfeng Yi · Tianbao Yang
- 2017 Poster: Scalable Demand-Aware Recommendation
  Jinfeng Yi · Cho-Jui Hsieh · Kush Varshney · Lijun Zhang · Yao Li
- 2017 Poster: Improved Dynamic Regret for Non-degenerate Functions
  Lijun Zhang · Tianbao Yang · Jinfeng Yi · Rong Jin · Zhi-Hua Zhou
- 2012 Poster: Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning
  Jinfeng Yi · Rong Jin · Anil K Jain · Shaili Jain