Leveraging Unlabeled Data to Predict Out-of-Distribution Performance
Saurabh Garg · Sivaraman Balakrishnan · Zachary Lipton · Behnam Neyshabur · Hanie Sedghi
Event URL: https://openreview.net/forum?id=wcrff7Gh0RR
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions that may cause performance drops. In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data. We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples for which model confidence exceeds that threshold. ATC outperforms previous methods across several model architectures, types of distribution shifts (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (WILDS-FMoW, ImageNet, BREEDS, CIFAR, and MNIST). In our experiments, ATC estimates target performance 2–4x more accurately than prior methods. We also explore the theoretical foundations of the problem, proving that, in general, identifying the accuracy is just as hard as identifying the optimal predictor; thus, the efficacy of any method rests upon (perhaps unstated) assumptions about the nature of the shift. Finally, analyzing our method on some toy distributions, we provide insights concerning when it works.
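The abstract's core recipe can be sketched in a few lines of NumPy. The sketch below assumes the maximum-softmax-confidence score (the paper also discusses other scores, e.g., negative entropy); the function names and the quantile-based threshold fit are illustrative choices, not the authors' reference implementation. The idea: pick the threshold on labeled source data so that the fraction of source examples whose confidence exceeds it matches the source accuracy, then report the fraction of unlabeled target examples above that same threshold as the predicted target accuracy.

```python
import numpy as np

def fit_atc_threshold(source_probs, source_labels):
    """Fit the ATC threshold on labeled source data.

    source_probs: (n, k) softmax outputs; source_labels: (n,) int labels.
    Uses max confidence as the score (one of the scores the paper considers).
    """
    scores = source_probs.max(axis=1)
    source_acc = (source_probs.argmax(axis=1) == source_labels).mean()
    # Choose t so that the fraction of source scores above t equals the
    # source accuracy; the (1 - acc)-quantile of the scores achieves this.
    return np.quantile(scores, 1.0 - source_acc)

def predict_target_accuracy(target_probs, threshold):
    """Predicted accuracy = fraction of target examples whose max
    confidence exceeds the learned threshold (no target labels needed)."""
    return (target_probs.max(axis=1) >= threshold).mean()
```

By construction, feeding the source data back through `predict_target_accuracy` recovers (approximately) the source accuracy; the method's bet, as the abstract notes, is that the same threshold remains calibrated under the kinds of shift studied in the paper.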

Author Information

Saurabh Garg (CMU)
Sivaraman Balakrishnan (Carnegie Mellon University)
Zachary Lipton (Carnegie Mellon University)
Behnam Neyshabur (Google)

I am a staff research scientist at Google. Before that, I was a postdoctoral researcher at New York University and a member of the Theoretical Machine Learning program at the Institute for Advanced Study (IAS) in Princeton. In summer 2017, I received a PhD in computer science from TTI-Chicago, where I was fortunate to be advised by Nati Srebro.

Hanie Sedghi (Google Research)

I am a senior research scientist at Google Brain, where I lead the "Deep Phenomena" team. My approach is to bridge theory and practice in large-scale machine learning by designing algorithms with theoretical guarantees that also work efficiently in practice. In recent years, I have been working on understanding and improving deep learning. Prior to Google, I was a research scientist at the Allen Institute for Artificial Intelligence and, before that, a postdoctoral fellow at UC Irvine. I received my PhD from the University of Southern California, with a minor in mathematics, in 2015.
