Constructing Unrestricted Adversarial Examples with Generative Models
Yang Song · Rui Shu · Nate Kushman · Stefano Ermon

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #149

Adversarial examples are typically constructed by perturbing an existing data point within a small matrix norm, and current defense methods are focused on guarding against this type of attack. In this paper, we propose a new class of adversarial examples that are synthesized entirely from scratch using a conditional generative model, without being restricted to norm-bounded perturbations. We first train an Auxiliary Classifier Generative Adversarial Network (AC-GAN) to model the class-conditional distribution over data samples. Then, conditioned on a desired class, we search over the AC-GAN latent space to find images that are likely under the generative model and are misclassified by a target classifier. We demonstrate through human evaluation that this new kind of adversarial image, which we call a Generative Adversarial Example, is legitimate and belongs to the desired class. Our empirical results on the MNIST, SVHN, and CelebA datasets show that generative adversarial examples can bypass strong adversarial training and certified defense methods designed for traditional adversarial attacks.
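The core search step described above, optimizing over the latent space of a trained generator to find samples a target classifier mislabels, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: it replaces the AC-GAN generator and the target classifier with hypothetical linear stand-ins, ignores class conditioning, and uses a soft Gaussian-prior penalty (`lam * ||z||^2`) in place of the paper's constraint that samples remain likely under the generative model. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a trained generator G(z) and classifier f(x):
# random linear maps, used purely to make the search loop concrete.
W_g = rng.normal(size=(8, 4))   # "generator": 4-dim latent -> 8-dim image
W_f = rng.normal(size=(3, 8))   # "classifier": image -> 3 class logits


def generate(z):
    return W_g @ z


def logits(x):
    return W_f @ x


def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()


def find_generative_adv_example(target_class, steps=2000, lr=0.05, lam=0.01):
    """Gradient descent over the latent z: push the classifier's output
    toward target_class while a ridge penalty lam * ||z||^2 keeps z
    plausible under a standard-normal latent prior (a stand-in for the
    paper's likelihood constraint)."""
    z = rng.normal(size=4)
    onehot = np.zeros(3)
    onehot[target_class] = 1.0
    for _ in range(steps):
        p = softmax(logits(generate(z)))
        # Chain rule: dCE/dlogits = p - onehot, dlogits/dx = W_f, dx/dz = W_g
        grad_z = W_g.T @ (W_f.T @ (p - onehot)) + 2 * lam * z
        z -= lr * grad_z
    return z, softmax(logits(generate(z)))


z_adv, probs = find_generative_adv_example(target_class=2)
```

After the loop, `probs` concentrates on the requested (wrong) class even though no existing data point was perturbed; in the paper the same idea is applied in the latent space of a conditional GAN, with human raters confirming the generated image still belongs to the desired source class.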

Author Information

Yang Song (Stanford University)
Rui Shu (Stanford University)
Nate Kushman (Microsoft Research Cambridge)
Stefano Ermon (Stanford)

More from the Same Authors