
Adversarial Robustness is at Odds with Lazy Training
Yunjuan Wang · Enayat Ullah · Poorya Mianjy · Raman Arora

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #723

Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021]. We extend this line of work to "lazy training" of neural networks, a dominant model in deep learning theory in which neural networks are provably efficiently learnable. We show that over-parametrized neural networks that are guaranteed to generalize well and enjoy strong computational guarantees remain vulnerable to attacks generated using a single step of gradient ascent.
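The single-step gradient-ascent attack mentioned in the abstract can be illustrated on a toy example. The sketch below is a hypothetical NumPy construction, not the paper's actual setup: it builds a small random two-layer ReLU network and perturbs an input by one step in the (sign-adjusted) gradient direction, pushing the output toward the decision boundary. The network width, step size, and normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random two-layer ReLU network f(x) = v^T relu(W x)
# (illustrative sizes; not the paper's construction)
d, m = 10, 200
W = rng.standard_normal((m, d)) / np.sqrt(d)
v = rng.standard_normal(m) / np.sqrt(m)

def f(x):
    """Network output (a scalar); its sign is the predicted label."""
    return v @ np.maximum(W @ x, 0.0)

def grad_f(x):
    """Gradient of f with respect to the input (subgradient at ReLU kinks)."""
    return W.T @ (v * (W @ x > 0))

# A unit-norm input point
x = rng.standard_normal(d)
x /= np.linalg.norm(x)

# Single step of normalized gradient ascent on the loss: move against
# the sign of the output so that f(x_adv) is driven toward zero.
eps = 0.1  # perturbation budget (illustrative)
g = grad_f(x)
x_adv = x - np.sign(f(x)) * eps * g / np.linalg.norm(g)

print(f"f(x)     = {f(x):+.4f}")
print(f"f(x_adv) = {f(x_adv):+.4f}")
```

Because a ReLU network is piecewise linear, the output moves by roughly `eps * ||grad_f(x)||` under this step; when the gradient norm is large relative to the margin `|f(x)|`, a small perturbation suffices to change the predicted sign.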

Author Information

Yunjuan Wang (Johns Hopkins University)
Enayat Ullah (Johns Hopkins University)
Poorya Mianjy (Johns Hopkins University)
Raman Arora (Johns Hopkins University)
