

Poster

Constructing Unrestricted Adversarial Examples with Generative Models

Yang Song · Rui Shu · Nate Kushman · Stefano Ermon

Room 517 AB #149

Keywords: [ Privacy, Anonymity, and Security ] [ Generative Models ] [ Adversarial Networks ]


Abstract:

Adversarial examples are typically constructed by perturbing an existing data point within a small norm ball, and current defense methods focus on guarding against this type of attack. In this paper, we propose a new class of adversarial examples that are synthesized entirely from scratch using a conditional generative model, without being restricted to norm-bounded perturbations. We first train an Auxiliary Classifier Generative Adversarial Network (AC-GAN) to model the class-conditional distribution over data samples. Then, conditioned on a desired class, we search over the AC-GAN latent space to find images that are likely under the generative model yet misclassified by a target classifier. We demonstrate through human evaluation that these new adversarial images, which we call Generative Adversarial Examples, are legitimate and belong to the desired class. Our empirical results on the MNIST, SVHN, and CelebA datasets show that generative adversarial examples can bypass strong adversarial training and certified defense methods designed for traditional adversarial attacks.
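To make the latent-space search concrete, the sketch below shows one way such a search could be set up in PyTorch. The conditional generator `G`, target classifier `f_target`, latent dimensionality, step count, and loss weighting are all illustrative assumptions, not the paper's exact objective or code: the idea is simply to optimize a latent code so the generated image stays conditioned on the desired class while the target classifier is pushed toward a different label.

```python
import torch
import torch.nn.functional as F

def search_generative_adversarial_example(
    G,                # assumed pretrained conditional generator: G(z, y) -> image batch
    f_target,         # assumed target classifier under attack: f_target(x) -> logits
    y_source,         # class the generated image should genuinely belong to
    y_adv,            # class the target classifier should be fooled into predicting
    latent_dim=128,   # latent dimensionality of G (assumption)
    steps=500,
    lr=0.05,
    lam=0.1,          # illustrative weight keeping z near its initialization
):
    """Search G's latent space for an image conditioned on y_source that
    f_target misclassifies as y_adv (a hedged sketch of the general idea)."""
    z0 = torch.randn(1, latent_dim)           # initial latent code from the prior
    z = z0.clone().requires_grad_(True)
    label_src = torch.tensor([y_source])
    label_adv = torch.tensor([y_adv])
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        x = G(z, label_src)                   # generate an image of the source class
        logits = f_target(x)
        # Push the classifier toward the adversarial label while keeping z
        # close to z0 so the sample remains likely under the generative model.
        loss = F.cross_entropy(logits, label_adv) + lam * ((z - z0) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return G(z, label_src).detach()
```

In this sketch the soft penalty on `||z - z0||` stands in for the requirement that the image remain plausible under the generative model; any mechanism that keeps the latent code in a high-likelihood region would serve the same role.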
