Diffusion Visual Counterfactual Explanations
Maximilian Augustin · Valentyn Boreiko · Francesco Croce · Matthias Hein

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #229

Visual Counterfactual Explanations (VCEs) are an important tool for understanding the decisions of an image classifier. They are “small” but “realistic” semantic changes of the image that flip the classifier's decision. Current approaches for generating VCEs are restricted to adversarially robust models and often contain non-realistic artefacts, or are limited to image classification problems with few classes. In this paper, we overcome these limitations by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers via a diffusion process. Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and a late start of the diffusion process, allows us to generate images with minimal semantic changes to the original ones but different classification. Second, our cone regularization via an adversarially robust model ensures that the diffusion process does not converge to trivial non-semantic changes, but instead produces realistic images of the target class which achieve high confidence by the classifier.
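The cone regularization mentioned above can be illustrated geometrically: the guidance gradient of the (possibly non-robust) target classifier is kept only if it lies within a fixed angular cone around the gradient of an adversarially robust model, and is otherwise projected onto the cone's boundary. The following is a minimal numpy sketch of such a cone projection; the function name, the angle parameter, and the implementation details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cone_project(grad, robust_grad, alpha_deg=30.0):
    """Illustrative cone projection (not the paper's reference code).

    Projects `grad` into the cone of half-angle `alpha_deg` (degrees)
    centered on `robust_grad`. If the angle between the two vectors is
    already <= alpha, `grad` is returned unchanged; otherwise it is
    rotated, within the plane spanned by both vectors, onto the cone
    boundary while keeping its original norm.
    """
    g = grad.ravel().astype(float)
    r = robust_grad.ravel().astype(float)
    r_unit = r / np.linalg.norm(r)
    g_norm = np.linalg.norm(g)
    alpha = np.deg2rad(alpha_deg)

    cos_angle = np.dot(g, r_unit) / g_norm
    if cos_angle >= np.cos(alpha):
        return grad  # already inside the cone: keep the gradient as-is

    # Split g into components parallel and orthogonal to the robust gradient,
    # then recombine them at exactly angle alpha from r_unit.
    g_par = np.dot(g, r_unit) * r_unit
    g_orth = g - g_par
    orth_unit = g_orth / np.linalg.norm(g_orth)
    projected = g_norm * (np.cos(alpha) * r_unit + np.sin(alpha) * orth_unit)
    return projected.reshape(grad.shape)
```

In a guided diffusion loop, such a projection would be applied to the classifier gradient at each denoising step, so the guidance direction can never deviate from the robust model's direction by more than the cone angle, which is the mechanism that rules out trivial non-semantic (adversarial) perturbation directions.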

Author Information

Maximilian Augustin (University of Tübingen)
Valentyn Boreiko (Eberhard-Karls-Universität Tübingen)
Francesco Croce (University of Tübingen)
Matthias Hein (University of Tübingen)
