Poster
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
Yi Zhang · Orestis Plevrakis · Simon Du · Xingguo Li · Zhao Song · Sanjeev Arora

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1594
Adversarial training is a popular method for making neural nets robust against adversarial perturbations. In practice, adversarial training achieves low robust training loss, but a rigorous explanation for why this happens under natural conditions is still missing. Recently, a convergence theory for standard (non-adversarial) supervised training was developed by various groups for \emph{very overparametrized} nets. It is unclear how to extend these results to adversarial training because of the min-max objective. Recently, a first step in this direction was made by Gao et al. using tools from online learning, but their result requires the width of the net to be \emph{exponential} in the input dimension $d$ and relies on an unnatural activation function. Our work proves convergence to low robust training loss for \emph{polynomial} width instead of exponential, under natural assumptions and with ReLU activations. A key element of our proof is showing that ReLU networks near initialization can approximate the step function, which may be of independent interest.
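For context, adversarial training with a perturbation budget $\epsilon$ is commonly written as the min-max problem (generic notation, not necessarily the exact setup of the paper):
$$\min_{\theta} \; \frac{1}{n}\sum_{i=1}^{n} \max_{\|\delta_i\| \le \epsilon} \ell\big(f_\theta(x_i + \delta_i),\, y_i\big),$$
where $f_\theta$ denotes the network, $\ell$ the loss, and $(x_i, y_i)$ the training examples; the inner maximization over perturbations $\delta_i$ is what makes extending standard convergence analyses to this setting nontrivial.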

Author Information

Yi Zhang (Princeton University)
Orestis Plevrakis (Princeton University)
Simon Du (Institute for Advanced Study)
Xingguo Li (Princeton University)
Zhao Song (IAS/Princeton)
Sanjeev Arora (Princeton University)
