
On Feature Learning in the Presence of Spurious Correlations
Pavel Izmailov · Polina Kirichenko · Nate Gruver · Andrew Wilson

Tue Nov 29 02:00 PM -- 04:00 PM (PST) @ Hall J #103

Deep classifiers are known to rely on spurious features — patterns which are correlated with the target on the training data but not inherently relevant to the learning problem, such as image backgrounds when classifying the foregrounds. In this paper we evaluate the amount of information about the core (non-spurious) features that can be decoded from the representations learned by standard empirical risk minimization (ERM) and by specialized group robustness training. Following recent work on Deep Feature Reweighting (DFR), we evaluate the feature representations by re-training the last layer of the model on a held-out set where the spurious correlation is broken. On multiple vision and NLP problems, we show that the features learned by simple ERM are highly competitive with the features learned by specialized group robustness methods targeted at reducing the effect of spurious correlations. Moreover, we show that the quality of the learned feature representations is greatly affected by design decisions beyond the training method, such as the model architecture and pre-training strategy. On the other hand, we find that strong regularization is not necessary for learning high-quality feature representations. Finally, using insights from our analysis, we significantly improve upon the best results reported in the literature on the popular Waterbirds, CelebA hair color prediction and WILDS-FMOW problems, achieving 97%, 92% and 50% worst-group accuracies, respectively.
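The DFR evaluation described above can be illustrated with a minimal sketch. Here the "frozen" features of a pretrained ERM model are emulated by a synthetic two-dimensional representation (one core and one spurious feature, both hypothetical names); only a linear last layer is retrained, by plain logistic regression, on a held-out set where the spurious correlation is broken (groups are balanced). This is an illustrative toy, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(n, spurious_corr, rng):
    """Synthetic 'frozen' features: column 0 tracks the label (core),
    column 1 agrees with the label with probability spurious_corr."""
    y = rng.integers(0, 2, size=n)
    core = y + 0.3 * rng.standard_normal(n)
    agree = rng.random(n) < spurious_corr
    spur = np.where(agree, y, 1 - y) + 0.3 * rng.standard_normal(n)
    return np.stack([core, spur], axis=1), y

# Held-out reweighting set: spurious_corr = 0.5 breaks the correlation,
# mimicking a group-balanced set as used in DFR.
X_hold, y_hold = make_features(2000, spurious_corr=0.5, rng=rng)

# Retrain only the last (linear) layer via logistic-regression gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_hold @ w + b)))
    grad = p - y_hold
    w -= 0.1 * (X_hold.T @ grad) / len(y_hold)
    b -= 0.1 * grad.mean()

# Because the held-out set breaks the correlation, the retrained layer
# should put most of its weight on the core feature.
```

On a fresh balanced test set this reweighted layer classifies using the core feature, which is the mechanism the abstract's evaluation relies on.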

Author Information

Pavel Izmailov (New York University)
Polina Kirichenko (New York University)
Nate Gruver (New York University)
Andrew Wilson (New York University)

I am a professor of machine learning at New York University.
