A Closer Look at Accuracy vs. Robustness
Yao-Yuan Yang · Cyrus Rashtchian · Hongyang Zhang · Russ Salakhutdinov · Kamalika Chaudhuri

Tue Dec 08 09:00 PM -- 11:00 PM (PST) @ Poster Session 2 #667

Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning. We take a closer look at this phenomenon and first show that real image datasets are actually separated. With this property in mind, we then prove that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions, and hence, there should be no inherent tradeoff between robustness and accuracy. Through extensive experiments with robustness methods, we argue that the gap between theory and practice arises from two limitations of current methods: either they fail to impose local Lipschitzness or they are insufficiently generalized. We explore combining dropout with robust training methods and obtain better generalization. We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness and augmenting them with deep learning generalization techniques.
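The separation property in the abstract can be made concrete: a dataset is r-separated if any two points with different labels are at least 2r apart, so robust classification within radius r is information-theoretically possible. A minimal sketch of checking this empirically (the function name `separation` and the toy data are illustrative, not from the paper):

```python
import numpy as np

def separation(X, y, ord=np.inf):
    """Minimum distance between any two differently-labeled points.

    If this value exceeds 2r, the dataset is r-separated: no adversarial
    perturbation of size r can move a point across class boundaries.
    """
    X = np.asarray(X, dtype=float).reshape(len(X), -1)
    y = np.asarray(y)
    best = np.inf
    for i in range(len(X)):
        other = y != y[i]           # compare only against other classes
        if not other.any():
            continue
        d = np.linalg.norm(X[other] - X[i], ord=ord, axis=1)
        best = min(best, d.min())
    return best

# Toy example: two clusters on a line, classes 0.9 apart in l-infinity.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [1.1, 0.0]])
y = np.array([0, 0, 1, 1])
print(separation(X, y))  # minimum cross-class l-infinity distance (~0.9)
```

The paper's measurement on real image datasets uses the same idea at scale: compute inter-class distances under the threat-model norm and compare against the perturbation budget.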

Author Information

Yao-Yuan Yang (UCSD)
Cyrus Rashtchian (UCSD)

I am a senior research scientist at Google. I work on robustness, OOD generalization, and theoretical machine learning.

Hongyang Zhang (TTIC)
Russ Salakhutdinov (Carnegie Mellon University)
Kamalika Chaudhuri (UCSD)