When a deep learning model is deployed in the wild, it can encounter test data drawn from distributions different from the training data distribution and suffer a drop in performance. For safe deployment, it is essential to estimate the accuracy of the pre-trained model on the test data. However, the labels for the test inputs are usually not immediately available in practice, and obtaining them can be expensive. This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; (2) error detection, which aims to identify mis-classified test inputs. In this paper, we propose a principled and practically effective framework that simultaneously addresses the two tasks. The proposed framework iteratively learns an ensemble of models to identify mis-classified data points and performs self-training to improve the ensemble with the identified points. Theoretical analysis demonstrates that our framework enjoys provable guarantees for both accuracy estimation and error detection under mild conditions readily satisfied by practical deep learning models. Along with the framework, we propose and experiment with two instantiations and achieve state-of-the-art results on 59 tasks. For example, on iWildCam, one instantiation reduces the estimation error for unsupervised accuracy estimation by at least 70% and improves the F1 score for error detection by at least 4.7% compared to existing methods.
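The abstract describes the procedure only at a high level. Below is a minimal, hypothetical sketch (not the authors' implementation) of an iterative self-training-ensemble loop for the two tasks; the bootstrap resampling, the majority-vote disagreement rule, the choice of ensemble members, and all names (`self_training_ensemble`, `f_pretrained`, `n_members`, `n_iters`) are assumptions made for illustration.

```python
# Minimal, illustrative sketch of a self-training ensemble for unsupervised
# accuracy estimation and error detection. NOT the paper's implementation;
# all details here are assumptions. Class labels are assumed to be 0..K-1.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training_ensemble(f_pretrained, X_train, y_train, X_test,
                           n_members=5, n_iters=3, seed=0):
    """Estimate f_pretrained's accuracy on X_test and flag suspected errors."""
    rng = np.random.default_rng(seed)
    test_preds = f_pretrained(X_test)      # pre-trained model's predicted labels
    pseudo = test_preds.copy()             # pseudo-labels used for self-training

    for _ in range(n_iters):
        # Train an ensemble on labeled training data plus pseudo-labeled test data.
        votes = []
        for _m in range(n_members):
            idx = rng.choice(len(X_train), len(X_train), replace=True)  # bootstrap
            X_aug = np.vstack([X_train[idx], X_test])
            y_aug = np.concatenate([y_train[idx], pseudo])
            member = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
            votes.append(member.predict(X_test))
        votes = np.stack(votes)                                   # (n_members, n_test)

        # Majority vote of the ensemble for each test point.
        ensemble_pred = np.apply_along_axis(
            lambda v: np.bincount(v).argmax(), 0, votes)

        # Test points where the ensemble disagrees with the pre-trained model
        # are treated as suspected misclassifications.
        suspected_errors = ensemble_pred != test_preds

        # Self-training step: relabel the suspected errors with the ensemble's
        # predictions and repeat with the improved pseudo-labels.
        pseudo = np.where(suspected_errors, ensemble_pred, test_preds)

    est_accuracy = 1.0 - suspected_errors.mean()
    return est_accuracy, suspected_errors
```

In this toy version, the estimated accuracy is simply the fraction of test points on which the final ensemble agrees with the pre-trained model, and the disagreement mask doubles as the error-detection output.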
Author Information
Jiefeng Chen (University of Wisconsin-Madison)
I am currently a third-year PhD student in the Computer Science Department at the University of Wisconsin-Madison, co-advised by Prof. Yingyu Liang and Prof. Somesh Jha. I work on trustworthy machine learning, with research questions such as "How to make machine learning models produce stable explanations of their predictions?", "How to train models that produce robust predictions under adversarial perturbations?", and "When and why do some defense mechanisms work?". I obtained my Bachelor's degree in Computer Science from Shanghai Jiao Tong University (SJTU).
Frederick Liu (Google)
Besim Avci (Google)
Xi Wu (Google)
Yingyu Liang (Princeton University)
Somesh Jha (University of Wisconsin, Madison)
More from the Same Authors
- 2022: Best of Both Worlds: Towards Adversarial Robustness with Transduction and Rejection »
  Nils Palumbo · Yang Guo · Xi Wu · Jiefeng Chen · Yingyu Liang · Somesh Jha
- 2022 Spotlight: Lightning Talks 2A-2 »
  Harikrishnan N B · Jianhao Ding · Juha Harviainen · Yizhen Wang · Lue Tao · Oren Mangoubi · Tong Bu · Nisheeth Vishnoi · Mohannad Alhanahnah · Mikko Koivisto · Aditi Kathpalia · Lei Feng · Nithin Nagaraj · Hongxin Wei · Xiaozhu Meng · Petteri Kaski · Zhaofei Yu · Tiejun Huang · Ke Wang · Jinfeng Yi · Jian Liu · Sheng-Jun Huang · Mihai Christodorescu · Songcan Chen · Somesh Jha
- 2022 Spotlight: Robust Learning against Relational Adversaries »
  Yizhen Wang · Mohannad Alhanahnah · Xiaozhu Meng · Ke Wang · Mihai Christodorescu · Somesh Jha
- 2022 Poster: Chefs' Random Tables: Non-Trigonometric Random Features »
  Valerii Likhosherstov · Krzysztof M Choromanski · Kumar Avinava Dubey · Frederick Liu · Tamas Sarlos · Adrian Weller
- 2022 Poster: Overparameterization from Computational Constraints »
  Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Mingyuan Wang
- 2022 Poster: Robust Learning against Relational Adversaries »
  Yizhen Wang · Mohannad Alhanahnah · Xiaozhu Meng · Ke Wang · Mihai Christodorescu · Somesh Jha
- 2022 Poster: A Quantitative Geometric Approach to Neural-Network Smoothness »
  Zi Wang · Gautam Prakriya · Somesh Jha
- 2022 Poster: First is Better Than Last for Language Data Influence »
  Chih-Kuan Yeh · Ankur Taly · Mukund Sundararajan · Frederick Liu · Pradeep Ravikumar
- 2021 Poster: A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks »
  Samuel Deng · Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Abhradeep Guha Thakurta
- 2019 Poster: Attribution-Based Confidence Metric For Deep Neural Networks »
  Susmit Jha · Sunny Raj · Steven Fernandes · Sumit K Jha · Somesh Jha · Brian Jalaian · Gunjan Verma · Ananthram Swami
- 2019 Poster: Robust Attribution Regularization »
  Jiefeng Chen · Xi Wu · Vaibhav Rastogi · Yingyu Liang · Somesh Jha
- 2018: Semantic Adversarial Examples by Somesh Jha »
  Somesh Jha
- 2016 Poster: Recovery Guarantee of Non-negative Matrix Factorization via Alternating Updates »
  Yuanzhi Li · Yingyu Liang · Andrej Risteski
- 2015 Poster: Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients »
  Bo Xie · Yingyu Liang · Le Song
- 2014 Poster: Improved Distributed Principal Component Analysis »
  Yingyu Liang · Maria-Florina F Balcan · Vandana Kanchanapally · David Woodruff
- 2014 Poster: Learning Time-Varying Coverage Functions »
  Nan Du · Yingyu Liang · Maria-Florina F Balcan · Le Song
- 2014 Poster: Scalable Kernel Methods via Doubly Stochastic Gradients »
  Bo Dai · Bo Xie · Niao He · Yingyu Liang · Anant Raj · Maria-Florina F Balcan · Le Song
- 2013 Poster: Distributed k-means and k-median clustering on general communication topologies »
  Maria-Florina F Balcan · Steven Ehrlich · Yingyu Liang