Evaluating Robust Perceptual Losses for Image Reconstruction
Tobias Uelwer · Felix Michels · Oliver De Candido
Event URL: https://openreview.net/forum?id=Jxn-jKvml4

Nowadays, many deep neural networks (DNNs) for image reconstruction tasks are trained using a combination of pixel-wise loss functions and perceptual image losses such as the learned perceptual image patch similarity (LPIPS). As these perceptual image losses compare the features of a pre-trained DNN, it is unsurprising that they are vulnerable to adversarial examples. It is known that (i) DNNs can be robustified against adversarial examples using adversarial training, and (ii) adversarial examples are imperceptible to the human eye. Thus, we hypothesize that perceptual metrics based on a robustly trained DNN are more aligned with human perception than those based on non-robust models. Our extensive experiments on an image super-resolution task show, however, that this is not the case. We observe that models trained with a robust perceptual loss tend to produce more artifacts in the reconstructed image. Furthermore, we were unable to find reliable image similarity metrics or evaluation methods to quantify these observations (both of which are known open problems).
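For intuition, an LPIPS-style perceptual loss compares deep feature activations rather than raw pixels: feature maps from several layers are unit-normalized along the channel axis and their squared differences are accumulated. The following is a minimal NumPy sketch of such a feature-space distance, using random stand-in arrays in place of real network activations (the function name and the uniform layer weighting are illustrative assumptions; LPIPS itself uses learned per-channel weights):

```python
import numpy as np

def perceptual_distance(feats_x, feats_y, eps=1e-10):
    """LPIPS-style distance sketch: for each layer, unit-normalize the
    (channels, height, width) feature maps along the channel axis, sum
    squared differences over channels, and average over spatial positions.
    Layers are weighted uniformly here (LPIPS learns these weights)."""
    total = 0.0
    for fx, fy in zip(feats_x, feats_y):
        # Unit-normalize each spatial feature vector along the channel axis.
        fx = fx / (np.linalg.norm(fx, axis=0, keepdims=True) + eps)
        fy = fy / (np.linalg.norm(fy, axis=0, keepdims=True) + eps)
        total += np.mean(np.sum((fx - fy) ** 2, axis=0))
    return total / len(feats_x)

# Stand-in "activations": two layers with (channels, height, width) shapes.
rng = np.random.default_rng(0)
feats_a = [rng.normal(size=(8, 4, 4)), rng.normal(size=(16, 2, 2))]
feats_b = [f + 0.1 * rng.normal(size=f.shape) for f in feats_a]

print(perceptual_distance(feats_a, feats_a))  # identical inputs -> 0.0
print(perceptual_distance(feats_a, feats_b))  # perturbed inputs -> small positive value
```

Because the comparison happens in a pre-trained network's feature space, small adversarial perturbations of the input can change these activations drastically, which is the vulnerability the abstract refers to.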

Author Information

Tobias Uelwer (Technical University of Dortmund)
Felix Michels (Heinrich-Heine Universität Düsseldorf)
Oliver De Candido (Technical University of Munich)
