Poster
A Spectral View of Adversarially Robust Features
Shivam Garg · Vatsal Sharan · Brian Zhang · Gregory Valiant
Room 517 AB #156
Keywords: [ Spectral Methods ] [ Learning Theory ] [ Deep Learning ] [ Algorithms ] [ Privacy, Anonymity, and Security ]
Given the apparent difficulty of learning models that are robust to adversarial perturbations, we propose tackling the simpler problem of developing adversarially robust features. Specifically, given a dataset and metric of interest, the goal is to return a function (or multiple functions) that 1) is robust to adversarial perturbations, and 2) has significant variation across the datapoints. We establish strong connections between adversarially robust features and a natural spectral property of the geometry of the dataset and metric of interest. This connection can be leveraged to provide both robust features and a lower bound on the robustness of any function that has significant variance across the dataset. Finally, we provide empirical evidence that the adversarially robust features given by this spectral approach can be fruitfully leveraged to learn a robust (and accurate) model.
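To make the spectral idea in the abstract concrete, here is a minimal sketch (not the authors' code): build a graph over the datapoints from the metric of interest and use a low eigenvector of its Laplacian as a candidate robust feature. The Euclidean metric, the distance threshold `epsilon`, the unnormalized Laplacian, the choice of the second-smallest eigenvector, and the function name `spectral_robust_feature` are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a spectral robust feature under the assumptions stated above.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh


def spectral_robust_feature(X, epsilon):
    """Return one feature value per datapoint from a graph-Laplacian eigenvector.

    X       : (n, d) array of datapoints.
    epsilon : distance threshold used to connect datapoints (assumed parameter).
    """
    # Pairwise distances under the metric of interest (Euclidean here for concreteness).
    D = squareform(pdist(X, metric="euclidean"))

    # Adjacency: connect pairs of points that lie within epsilon of each other.
    A = (D <= epsilon).astype(float)
    np.fill_diagonal(A, 0.0)

    # Unnormalized graph Laplacian L = deg - A.
    L = np.diag(A.sum(axis=1)) - A

    # The smallest eigenvalue of L is ~0 (constant eigenvector), so the
    # second-smallest eigenvector gives a feature that varies across the data
    # while changing slowly between nearby (hence adversarially close) points.
    eigvals, eigvecs = eigh(L)
    return eigvecs[:, 1]


# Example usage on toy data: two well-separated clusters.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)),
                   rng.normal(3.0, 0.1, size=(50, 2))])
    f = spectral_robust_feature(X, epsilon=1.0)
    print(f[:5], f[-5:])  # feature takes clearly different values on the two clusters
```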