Many deep neural networks (DNNs) for image reconstruction tasks are trained using a combination of pixel-wise loss functions and perceptual image losses such as the Learned Perceptual Image Patch Similarity (LPIPS). Since these perceptual losses compare images in the feature space of a pre-trained DNN, it is unsurprising that they are vulnerable to adversarial examples. It is known that (i) DNNs can be robustified against adversarial examples using adversarial training, and (ii) adversarial perturbations are imperceptible to the human eye. We therefore hypothesize that perceptual metrics based on a robustly trained DNN are more closely aligned with human perception than those based on non-robust models. Our extensive experiments on an image super-resolution task show, however, that this is not the case: models trained with a robust perceptual loss tend to produce more artifacts in the reconstructed images. Furthermore, we were unable to find reliable image similarity metrics or evaluation methods to quantify these observations, which remain known open problems.
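To make the training setup described above concrete, the following is a minimal sketch (not the authors' code) of a reconstruction loss that combines a pixel-wise term with an LPIPS perceptual term, using the publicly available `lpips` package. The class name `CombinedLoss` and the weighting factor `alpha` are illustrative choices; obtaining the "robust perceptual loss" studied in the paper would additionally require swapping in an adversarially trained feature extractor, which is not shown here.

```python
# Sketch of a pixel-wise + LPIPS training loss, assuming PyTorch and the `lpips` package.
import torch
import torch.nn.functional as F
import lpips


class CombinedLoss(torch.nn.Module):
    def __init__(self, alpha: float = 0.1, net: str = "alex"):
        super().__init__()
        # LPIPS compares images in the feature space of a pre-trained network
        # ("alex" or "vgg"). A robustly trained backbone would be needed for a
        # robust perceptual loss; this sketch uses the standard non-robust one.
        self.perceptual = lpips.LPIPS(net=net)
        self.alpha = alpha  # hypothetical weighting of the perceptual term

    def forward(self, prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Pixel-wise term (here: mean squared error).
        pixel_loss = F.mse_loss(prediction, target)
        # Perceptual term; LPIPS expects inputs scaled to [-1, 1].
        perceptual_loss = self.perceptual(prediction, target).mean()
        return pixel_loss + self.alpha * perceptual_loss
```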
Author Information
Tobias Uelwer (Technical University of Dortmund)
Felix Michels (Heinrich-Heine Universität Düsseldorf)
Oliver De Candido (Technical University of Munich)
More from the Same Authors
- 2022: Optimizing Intermediate Representations of Generative Models for Phase Retrieval
  Tobias Uelwer · Sebastian Konietzny · Stefan Harmeling
- 2022: Transformer-based World Models Are Happy With 100k Interactions
  Jan Robine · Marc Höftmann · Tobias Uelwer · Stefan Harmeling
- 2023 Workshop: I Can't Believe It's Not Better (ICBINB): Failure Modes in the Age of Foundation Models
  Estefany Kelly Buchanan · Fan Feng · Andreas Kriegler · Ian Mason · Tobias Uelwer · Yubin Xie · Rui Yang