Poster
Unlabeled Data Improves Adversarial Robustness
Yair Carmon · Aditi Raghunathan · Ludwig Schmidt · John Duchi · Percy Liang

Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #34
We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al., which exhibits a sample complexity gap between standard and robust classification. We prove that unlabeled data bridges this gap: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for high standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies by over 5 points in (i) $\ell_\infty$ robustness against several strong attacks via adversarial training and (ii) certified $\ell_2$ and $\ell_\infty$ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels.
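The robust self-training procedure described above has two stages: a standard (non-robust) classifier trained on the labeled data assigns pseudo-labels to the unlabeled pool, and a fresh model is then adversarially trained on the labeled and pseudo-labeled data combined. The PyTorch sketch below illustrates this pipeline under illustrative assumptions only: a toy linear model, random tensors shaped like CIFAR-10, and generic $\ell_\infty$ PGD settings (eps=8/255, 10 steps). None of these choices are the paper's exact configuration.

```python
# Minimal sketch of robust self-training. Model, data, and all
# hyperparameters are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """l_inf PGD: signed-gradient ascent steps, projected into the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def robust_self_training(model, standard_model, x_lab, y_lab, x_unlab,
                         epochs=10, lr=0.1, batch_size=128):
    # Stage 1: pseudo-label the unlabeled pool with a standard classifier.
    standard_model.eval()
    with torch.no_grad():
        y_pseudo = standard_model(x_unlab).argmax(dim=1)
    # Stage 2: adversarially train on labeled + pseudo-labeled data together.
    x_all = torch.cat([x_lab, x_unlab])
    y_all = torch.cat([y_lab, y_pseudo])
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        perm = torch.randperm(len(x_all))
        for i in range(0, len(x_all), batch_size):
            xb = x_all[perm[i:i + batch_size]]
            yb = y_all[perm[i:i + batch_size]]
            xb_adv = pgd_attack(model, xb, yb)  # attack the current model
            opt.zero_grad()
            F.cross_entropy(model(xb_adv), yb).backward()
            opt.step()
    return model

if __name__ == "__main__":
    # Smoke test on random data shaped like CIFAR-10 (3x32x32, 10 classes).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    standard = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x_lab, y_lab = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))
    x_unlab = torch.rand(256, 3, 32, 32)
    robust_self_training(model, standard, x_lab, y_lab, x_unlab, epochs=1)
```

In the paper's setting the unlabeled pool (e.g., the 500K Tiny Images selection) is much larger than the labeled set, which is what drives the robustness gains; the sketch keeps both small only so it runs as-is.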

Author Information

Yair Carmon (Stanford University)
Aditi Raghunathan (Stanford University)
Ludwig Schmidt (UC Berkeley)
John Duchi (Stanford University)
Percy Liang (Stanford University)
