Poster
Stability and Generalization of Adversarial Training for Shallow Neural Networks with Smooth Activation
Kaibo Zhang · Yunjuan Wang · Raman Arora
East Exhibit Hall A-C #2105
Adversarial training has emerged as a popular approach for training models that are robust to inference-time adversarial attacks. However, our theoretical understanding of why and when it works remains limited. Prior work has offered generalization analyses of adversarial training, but these are either restricted to the Neural Tangent Kernel (NTK) regime or rely on restrictive assumptions about the data, such as (noisy) linear separability or robust realizability. In this work, we study the stability and generalization of adversarial training for two-layer networks without any data distribution assumptions and beyond the NTK regime. Our findings suggest that for networks with any given initialization and sufficiently large width, the generalization bound can be effectively controlled via early stopping. We further improve the generalization bound by leveraging smoothing via the Moreau envelope.
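For readers unfamiliar with the algorithm being analyzed, the sketch below shows generic adversarial training of a two-layer network with a smooth activation (softplus) using a PGD inner maximization. It is a minimal illustration of the setting, not the authors' code; the network width, attack radius, step sizes, and epoch count are illustrative assumptions, with the epoch count standing in for the early-stopping parameter discussed above.

```python
# Minimal sketch of adversarial training for a two-layer network with a
# smooth activation. All hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def two_layer_net(d, m):
    # Two-layer network: width-m hidden layer with a smooth (softplus) activation.
    return nn.Sequential(nn.Linear(d, m), nn.Softplus(), nn.Linear(m, 1))

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    # Inner maximization: projected gradient ascent on the loss within an
    # l_inf ball of radius eps around the clean input x.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

def adversarial_train(model, loader, epochs=5, lr=0.01, eps=0.1):
    # Outer minimization: SGD on the adversarial loss; y is a float tensor of
    # {0, 1} labels with shape (batch, 1). The number of epochs plays the role
    # of the early-stopping parameter that controls generalization.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, eps=eps)
            opt.zero_grad()
            F.binary_cross_entropy_with_logits(model(x_adv), y).backward()
            opt.step()
    return model
```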