Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations
Tyler LaBonte · Abhishek Kumar · Vidya Muthukumar
Event URL: https://openreview.net/forum?id=3OxII8ZB3A
Empirical risk minimization (ERM) of neural networks can cause over-reliance on spurious correlations and poor generalization on minority groups. Deep feature reweighting (DFR) improves group robustness via last-layer retraining, but it requires full group and class annotations for the reweighting dataset. To eliminate this impractical requirement, we propose a one-shot active learning method that constructs the reweighting dataset from the points on which the ERM model's predictions with and without dropout disagree. Our experiments show our approach achieves 95% of DFR performance on the Waterbirds and CelebA datasets despite using no group annotations and up to $7.5\times$ fewer class annotations.
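The selection step described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: it stands in a toy linear classifier for the trained ERM network, simulates dropout by randomly masking input features, and collects the indices where the dropout prediction flips relative to the standard one. These disagreement points would then form the (class-annotated) reweighting dataset for last-layer retraining.

```python
import random

def predict(weights, x, drop_prob=0.0, rng=None):
    """Sign prediction of a toy linear model; optionally drop features
    with probability drop_prob (a stand-in for network dropout)."""
    score = 0.0
    for w, xi in zip(weights, x):
        if drop_prob > 0.0 and rng.random() < drop_prob:
            continue  # feature dropped out for this forward pass
        score += w * xi
    return 1 if score >= 0 else -1

def disagreement_points(weights, dataset, drop_prob=0.5, seed=0):
    """Indices where the dropout-active prediction differs from the
    standard (dropout-off) prediction -- the candidate reweighting set."""
    rng = random.Random(seed)
    picked = []
    for i, x in enumerate(dataset):
        y_plain = predict(weights, x)                          # dropout off
        y_drop = predict(weights, x, drop_prob=drop_prob, rng=rng)  # dropout on
        if y_plain != y_drop:
            picked.append(i)
    return picked
```

In a real deep-learning setting the same idea amounts to comparing the model's outputs in evaluation mode against a forward pass with dropout left active, which requires only one extra inference pass over the unlabeled pool.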

Author Information

Tyler LaBonte (Georgia Institute of Technology)

I am a second-year PhD student in Machine Learning at the Georgia Institute of Technology advised by Jake Abernethy and Vidya Muthukumar. I completed my BS in Applied and Computational Mathematics at the University of Southern California, where I was a Trustee Scholar and Viterbi Fellow. My work is generously supported by the DoD NDSEG Fellowship. I am interested in advancing our scientific understanding of deep learning using both theory and experimentation. My current focus is characterizing the generalization phenomena of overparameterized neural networks and developing provable algorithms for efficient, accurate, and robust learning. I also enjoy applying mathematically-justified techniques to large-scale computer vision problems. The ultimate goal of my research is to enable the safe and trusted deployment of deep learning systems in high-consequence applications such as medicine, defense, and energy.

Abhishek Kumar (Google Brain)
Vidya Muthukumar (Georgia Institute of Technology)