Adversarial Training is a Form of Data-dependent Operator Norm Regularization

Kevin Roth, Yannic Kilcher, Thomas Hofmann

Spotlight presentation: Orals & Spotlights Track 20: Social/Adversarial Learning
2020-12-09, 07:20–07:30 PST
Poster Session 4
2020-12-09, 09:00–11:00 PST
Abstract: We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks. Specifically, we prove that $l_p$-norm constrained projected gradient ascent based adversarial training with an $l_q$-norm loss on the logits of clean and perturbed inputs is equivalent to data-dependent $(p, q)$ operator norm regularization. This fundamental connection confirms the long-standing argument that a network's sensitivity to adversarial examples is tied to its spectral properties and hints at novel ways to robustify and defend against adversarial attacks. We provide extensive empirical evidence on state-of-the-art network architectures to support our theoretical results.
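The equivalence stated in the abstract can be illustrated in the simplest possible setting, a *linear* "network" $f(x) = Wx$: there, the logit difference between clean and perturbed inputs is $\|W\delta\|_2$, and $l_2$-constrained projected gradient ascent on it recovers $\epsilon$ times the top right singular vector of $W$, so the maximal loss equals $\epsilon \cdot \sigma_{\max}(W)$, i.e. the $(2,2)$ operator norm (spectral norm) scaled by the perturbation budget. The sketch below is an illustrative reconstruction, not the paper's code; the matrix, step size, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))   # illustrative linear "network" f(x) = W x
eps = 0.1                          # l2 perturbation budget

def project_l2(d, eps):
    """Project d onto the l2 ball of radius eps."""
    n = np.linalg.norm(d)
    return d if n <= eps else d * (eps / n)

# Projected gradient ascent on 0.5 * ||W d||_2^2, the (squared) logit
# difference ||f(x + d) - f(x)||_2 for a linear f.
d = project_l2(rng.standard_normal(3), eps)
for _ in range(500):
    g = W.T @ (W @ d)              # gradient of 0.5 * ||W d||_2^2
    d = project_l2(d + 0.1 * g, eps)

pga_value = np.linalg.norm(W @ d)                       # maximal logit change
sigma_max = np.linalg.svd(W, compute_uv=False)[0]       # (2,2) operator norm
# pga_value converges to eps * sigma_max: the adversarial objective
# equals the data-independent spectral norm in the linear case.
```

For a nonlinear network the same argument applies locally through the Jacobian at each data point, which is what makes the resulting operator norm regularization data-dependent.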
