Poster
Deep Defense: Training DNNs with Improved Adversarial Robustness
Ziang Yan · Yiwen Guo · Changshui Zhang

Tue Dec 04 07:45 AM -- 09:45 AM (PST) @ Room 210 #28

Despite their efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, which limits their application in security-critical systems. Recent works have shown that imperceptibly perturbed image inputs (a.k.a. adversarial examples) can fool well-trained DNN classifiers into making arbitrary predictions. To address this problem, we propose a training recipe named "deep defense". Our core idea is to integrate an adversarial perturbation-based regularizer into the classification objective, so that the obtained models learn to resist potential attacks directly and precisely. The whole optimization problem is solved just like training a recursive network. Experimental results demonstrate that our method outperforms training with adversarial/Parseval regularizations by large margins on various datasets (including MNIST, CIFAR-10 and ImageNet) and different DNN architectures. Code and models for reproducing our results are available at https://github.com/ZiangYan/deepdefense.pytorch.
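As a rough illustration of the idea, the PyTorch sketch below folds a perturbation-magnitude regularizer into the classification loss: a one-step, DeepFool-style estimate of each example's distance to the decision boundary is penalized via exp(-c * d / ||x||), so that correctly classified examples are pushed away from the boundary during training. This is a minimal sketch under simplifying assumptions, not the released implementation: the paper unrolls the full iterative perturbation computation like a recursive network, whereas a single linearization step stands in for it here, and all names (perturbation_regularized_loss, lambda_reg, c) are illustrative rather than taken from the repository.

import torch
import torch.nn.functional as F

def perturbation_regularized_loss(model, x, y, lambda_reg=1.0, c=25.0, eps=1e-8):
    # Standard classification term.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Runner-up class per example; for misclassified examples the top
    # prediction itself serves as the competing class.
    top2 = logits.topk(2, dim=1).indices
    pred = top2[:, 0]
    runner_up = torch.where(pred == y, top2[:, 1], pred)

    # Logit margin between the true class and the competing class.
    idx = torch.arange(x.size(0), device=x.device)
    margin = logits[idx, y] - logits[idx, runner_up]

    # One linearization step: the distance to the decision boundary is
    # roughly |margin| / ||grad of margin||. create_graph=True keeps this
    # estimate differentiable w.r.t. the network weights.
    grad = torch.autograd.grad(margin.sum(), x, create_graph=True)[0]
    dist = margin.abs() / (grad.flatten(1).norm(dim=1) + eps)

    # Normalized distance, penalized as exp(-c * d / ||x||): examples
    # sitting close to the boundary incur a large penalty, so minimizing
    # it enlarges the (normalized) adversarial margin.
    x_norm = x.detach().flatten(1).norm(dim=1) + eps
    correct = pred == y
    reg = torch.exp(-c * dist / x_norm)[correct].sum() / correct.sum().clamp(min=1)

    return ce + lambda_reg * reg

Dropped into an ordinary training loop in place of plain cross-entropy, this loss trades off clean accuracy against robustness through lambda_reg; the exponential weighting concentrates the penalty on the most easily attacked examples.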

Author Information

Ziang Yan (Automation Department, Tsinghua University)
Yiwen Guo (Intel Labs China)
Changshui Zhang (Tsinghua University)
