Poster
Models Out of Line: A Fourier Lens on Distribution Shift Robustness
Sara Fridovich-Keil · Brian Bartoldson · James Diffenderfer · Bhavya Kailkhura · Timo Bremer
Improving the accuracy of deep neural networks on out-of-distribution (OOD) data is critical to the acceptance of deep learning in real-world applications. It has been observed that accuracies on in-distribution (ID) versus OOD data follow a linear trend, and models that outperform this baseline are exceptionally rare (and referred to as "effectively robust"). Recently, some promising approaches have been developed to improve OOD robustness: model pruning, data augmentation, and ensembling or zero-shot evaluating large pretrained models. However, there is still no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness. We approach this issue by conducting a comprehensive empirical study of diverse approaches that are known to impact OOD robustness, on a broad range of natural and synthetic distribution shifts of CIFAR-10 and ImageNet. In particular, we view the "effective robustness puzzle" through a Fourier lens and ask how spectral properties of both models and OOD data correlate with OOD robustness. We find this Fourier lens offers some insight into why certain robust models, particularly those from the CLIP family, achieve OOD robustness. However, our analysis also makes clear that no known metric consistently best explains OOD robustness. Thus, to aid future research into the OOD puzzle, we address the gap in publicly available models with effective robustness by introducing a set of pretrained CIFAR-10 models---RobustNets---with varying levels of OOD robustness.
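The notion of "effective robustness" referenced in the abstract can be made concrete: fit the linear ID-vs-OOD accuracy trend across a population of baseline models, then measure how far a candidate model sits above that line. The sketch below is a minimal illustration of this idea with made-up accuracy numbers, not the paper's actual data or fitting procedure (prior work often fits the trend in a transformed, e.g. logit, accuracy space; a plain linear fit is assumed here for simplicity).

```python
import numpy as np

def effective_robustness(baseline_id, baseline_ood, model_id_acc, model_ood_acc):
    """Fit the linear ID->OOD accuracy trend on baseline models, then report
    how far a candidate model sits above (positive) or below (negative) it."""
    slope, intercept = np.polyfit(baseline_id, baseline_ood, deg=1)
    predicted_ood = slope * model_id_acc + intercept
    return model_ood_acc - predicted_ood

# Illustrative numbers only: baseline models lie roughly on a line,
# and the candidate model exceeds the OOD accuracy the line predicts.
baseline_id = np.array([0.90, 0.92, 0.94, 0.96])
baseline_ood = np.array([0.60, 0.64, 0.68, 0.72])
er = effective_robustness(baseline_id, baseline_ood, 0.94, 0.75)
# er = 0.75 - 0.68 = 0.07: the model is "effectively robust" by 7 points.
```

A model exactly on the trend line has effective robustness zero; the rare models the abstract describes are those with a clearly positive gap.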
Author Information
Sara Fridovich-Keil (UC Berkeley)
Brian Bartoldson (Lawrence Livermore National Laboratory)
James Diffenderfer (Lawrence Livermore National Laboratory)
Bhavya Kailkhura (Lawrence Livermore National Laboratory)
Timo Bremer (Lawrence Livermore National Laboratory)
More from the Same Authors
- 2021: Unsupervised Attribute Alignment for Characterizing Distribution Shift
  Matthew Olson · Rushil Anirudh · Jayaraman Thiagarajan · Timo Bremer · Weng-Keen Wong · Shusen Liu
- 2021: Geometric Priors for Scientific Generative Models in Inertial Confinement Fusion
  Ankita Shukla · Rushil Anirudh · Eugene Kur · Jayaraman Thiagarajan · Timo Bremer · Brian K Spears · Tammy Ma · Pavan Turaga
- 2022: Do Domain Generalization Methods Generalize Well?
  Akshay Mehra · Bhavya Kailkhura · Pin-Yu Chen · Jihun Hamm
- 2023 Poster: Neural Image Compression: Generalization, Robustness, and Spectral Biases
  Kelsey Lieberman · James Diffenderfer · Charles Godfrey · Bhavya Kailkhura
- 2022 Spotlight: Single Model Uncertainty Estimation via Stochastic Data Centering
  Jayaraman Thiagarajan · Rushil Anirudh · Vivek Sivaraman Narayanaswamy · Timo Bremer
- 2022 Poster: Single Model Uncertainty Estimation via Stochastic Data Centering
  Jayaraman Thiagarajan · Rushil Anirudh · Vivek Sivaraman Narayanaswamy · Timo Bremer
- 2022 Poster: When does dough become a bagel? Analyzing the remaining mistakes on ImageNet
  Vijay Vasudevan · Benjamin Caine · Raphael Gontijo Lopes · Sara Fridovich-Keil · Rebecca Roelofs
- 2022 Poster: Spectral Bias in Practice: The Role of Function Frequency in Generalization
  Sara Fridovich-Keil · Raphael Gontijo Lopes · Rebecca Roelofs
- 2021 Poster: G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators
  Yunhui Long · Boxin Wang · Zhuolin Yang · Bhavya Kailkhura · Aston Zhang · Carl Gunter · Bo Li
- 2021 Poster: A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness
  James Diffenderfer · Brian Bartoldson · Shreya Chaganti · Jize Zhang · Bhavya Kailkhura
- 2021 Poster: Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
  Akshay Mehra · Bhavya Kailkhura · Pin-Yu Chen · Jihun Hamm
- 2020 Poster: A Statistical Mechanics Framework for Task-Agnostic Sample Design in Machine Learning
  Bhavya Kailkhura · Jayaraman Thiagarajan · Qunwei Li · Jize Zhang · Yi Zhou · Timo Bremer
- 2020 Poster: Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
  Matthew Tancik · Pratul Srinivasan · Ben Mildenhall · Sara Fridovich-Keil · Nithin Raghavan · Utkarsh Singhal · Ravi Ramamoorthi · Jonathan Barron · Ren Ng
- 2020 Poster: Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
  Kaidi Xu · Zhouxing Shi · Huan Zhang · Yihan Wang · Kai-Wei Chang · Minlie Huang · Bhavya Kailkhura · Xue Lin · Cho-Jui Hsieh
- 2020 Spotlight: Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
  Matthew Tancik · Pratul Srinivasan · Ben Mildenhall · Sara Fridovich-Keil · Nithin Raghavan · Utkarsh Singhal · Ravi Ramamoorthi · Jonathan Barron · Ren Ng
- 2019 Poster: A Meta-Analysis of Overfitting in Machine Learning
  Becca Roelofs · Vaishaal Shankar · Benjamin Recht · Sara Fridovich-Keil · Moritz Hardt · John Miller · Ludwig Schmidt
- 2018 Poster: Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization
  Sijia Liu · Bhavya Kailkhura · Pin-Yu Chen · Paishun Ting · Shiyu Chang · Lisa Amini