

Poster in Workshop: I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification

Evaluating Robust Perceptual Losses for Image Reconstruction

Tobias Uelwer · Felix Michels · Oliver De Candido


Abstract:

Nowadays, many deep neural networks (DNNs) for image reconstruction tasks are trained using a combination of pixel-wise loss functions and perceptual image losses such as the learned perceptual image patch similarity (LPIPS). Since these perceptual losses compare features extracted by a pre-trained DNN, it is unsurprising that they are vulnerable to adversarial examples. It is known that (i) DNNs can be robustified against adversarial examples using adversarial training, and (ii) adversarial examples are imperceptible to the human eye. We therefore hypothesize that perceptual metrics based on a robustly trained DNN are more aligned with human perception than those based on non-robust models. Our extensive experiments on an image super-resolution task show, however, that this is not the case. We observe that models trained with a robust perceptual loss tend to produce more artifacts in the reconstructed images. Furthermore, we were unable to find reliable image similarity metrics or evaluation methods to quantify these observations (which are known open problems).
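For illustration, a minimal sketch (not the authors' code) of the kind of training objective the abstract describes: a pixel-wise loss combined with the LPIPS perceptual loss. It assumes PyTorch and the `lpips` PyPI package with its standard pre-trained (non-robust) backbone; the trade-off weight `alpha` is a hypothetical example value. Swapping the backbone for an adversarially robust model is the variant the paper evaluates and is not shown here.

```python
import torch
import lpips

pixel_loss = torch.nn.MSELoss()
perceptual_loss = lpips.LPIPS(net="alex")  # compares features of a pre-trained DNN
alpha = 0.1  # hypothetical weight balancing pixel-wise and perceptual terms

def reconstruction_loss(sr_image: torch.Tensor, hr_image: torch.Tensor) -> torch.Tensor:
    """Combined loss for a super-resolution model; images are NCHW tensors in [-1, 1]."""
    return pixel_loss(sr_image, hr_image) + alpha * perceptual_loss(sr_image, hr_image).mean()
```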
