Poster

Learning Bregman Divergences with Application to Robustness

Mohamed-Hicham LEGHETTAS · Markus Püschel

West Ballroom A-D #7208
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: We propose a novel and general method to learn Bregman divergences from raw high-dimensional data; the learned divergences measure similarity between images in pixel space. As a prototypical application, we learn divergences that treat real-world corruptions of an image (e.g., blur) as close to the original and noisy perturbations as far, even when the opposite holds in $L^p$-distance. We also show that the learned Bregman divergence excels on datasets of human perceptual similarity judgments, suggesting its utility in a range of applications. We then define adversarial attacks by replacing projected gradient descent (PGD) with the mirror descent associated with the learned Bregman divergence, and use them in adversarial training to improve the state of the art in robustness to common image corruptions. In particular, for the contrast corruption, which prior work found problematic, we achieve an accuracy on the CIFAR-10-C corruption dataset that exceeds the $L^p$- and LPIPS-based adversarially trained neural networks by a margin of 27.16%.
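For context, these are the standard definitions behind the method (not specific to this paper's parameterization of the potential): the Bregman divergence generated by a strictly convex, differentiable potential $\phi$ is

$$D_\phi(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle,$$

and the mirror ascent step that takes the place of the PGD step in an attack maximizing a loss $L$ is

$$x_{t+1} \;=\; (\nabla\phi)^{-1}\!\big(\nabla\phi(x_t) + \eta\, \nabla_x L(x_t, y)\big),$$

where $\nabla\phi$ maps iterates to the dual space and $(\nabla\phi)^{-1}$ maps them back.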
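To illustrate the update rule concretely, here is a minimal sketch of such a mirror-descent attack. It assumes a fixed per-pixel binary-entropy potential, whose mirror map is the logit and whose inverse is the sigmoid, rather than the learned potential the paper proposes; the linear classifier, step size, and iteration count below are placeholders.

```python
# Hypothetical sketch: an untargeted mirror-ascent attack on images in (0, 1).
# phi here is the fixed per-pixel binary entropy, so the mirror map is the
# logit and its inverse is the sigmoid; the paper instead learns phi from
# data, which this sketch does not do.
import torch
import torch.nn.functional as F

def mirror_descent_attack(model, x, y, step_size=0.5, n_steps=10, eps=1e-6):
    """Maximize cross-entropy by ascending in the dual (logit) space.

    Mirror step: z <- logit(x) + eta * grad_x loss;  x <- sigmoid(z).
    The sigmoid keeps iterates inside (0, 1) without an explicit projection.
    """
    x_adv = x.clamp(eps, 1 - eps).clone()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            z = torch.logit(x_adv, eps=eps) + step_size * grad  # dual-space step
            x_adv = torch.sigmoid(z)                            # inverse mirror map
    return x_adv.detach()

# Toy usage with a placeholder linear "classifier" on 8x8 grayscale images.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)
y = torch.randint(0, 10, (4,))
x_adv = mirror_descent_attack(model, x, y)
```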
