Poster
Understanding Robust Learning through the Lens of Representation Similarities
Christian Cianfarani · Arjun Nitin Bhagoji · Vikash Sehwag · Ben Zhao · Heather Zheng · Prateek Mittal

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #623

Representation learning, i.e. the generation of representations useful for downstream applications, is a task of fundamental importance that underlies much of the success of deep neural networks (DNNs). Recently, robustness to adversarial examples has emerged as a desirable property for DNNs, spurring the development of robust training methods that account for adversarial examples. In this paper, we aim to understand how the properties of representations learned by robust training differ from those obtained from standard, non-robust training. This is critical to diagnosing numerous salient pitfalls in robust networks, such as degradation of performance on benign inputs, poor generalization of robustness, and increased over-fitting. We utilize a powerful set of tools known as representation similarity metrics, across three vision datasets, to obtain layer-wise comparisons between robust and non-robust DNNs with different architectures, training procedures, and adversarial constraints. Our experiments highlight hitherto unseen properties of robust representations that we posit underlie the behavioral differences of robust networks. We discover a lack of specialization in robust networks' representations, along with a disappearance of "block structure". We also find that overfitting during robust training largely impacts deeper layers. These, along with other findings, suggest ways forward for the design and training of better robust networks.
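The layer-wise comparisons described above rely on representation similarity metrics. As an illustration (the abstract does not name a specific metric, so the choice of linear CKA here is an assumption), a minimal sketch of linear Centered Kernel Alignment between the activations of two layers:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices.

    X: (n_examples, d1), Y: (n_examples, d2) — activations of two layers
    (or the same layer in two networks) on the same batch of inputs.
    Returns a scalar similarity in [0, 1].
    """
    # Center each feature (column) across examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

# A layer compared with itself has similarity 1; unrelated random
# activations score much lower.
rng = np.random.default_rng(0)
acts_a = rng.standard_normal((128, 64))
acts_b = rng.standard_normal((128, 64))
print(linear_cka(acts_a, acts_a))  # 1.0
print(linear_cka(acts_a, acts_b))  # well below 1
```

Computing this score for every pair of layers yields the heatmaps in which patterns such as "block structure" (contiguous groups of mutually similar layers) are visible.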

Author Information

Christian Cianfarani (University of Chicago)
Arjun Nitin Bhagoji (University of Chicago)
Vikash Sehwag (Princeton University)
Ben Zhao (University of Chicago)
Heather Zheng (University of Chicago)
Prateek Mittal (Princeton University)