A common challenge across all areas of machine learning is that training data is not distributed like test data, due to natural shifts or adversarial examples; such test examples are referred to as out-of-distribution (OOD) test examples. We consider a model where one may abstain from predicting, at a fixed cost. In particular, our transductive abstention algorithm takes labeled training examples and unlabeled test examples as input, and provides predictions with optimal prediction loss guarantees. The loss bounds match standard generalization bounds when test examples are i.i.d. from the training distribution, but add an additional term that is the cost of abstaining times the statistical distance between the training and test distributions (or the fraction of adversarial examples). For linear regression, we give a polynomial-time algorithm based on Celis-Dennis-Tapia optimization algorithms. For binary classification, we show how to efficiently implement it using a proper agnostic learner (i.e., an empirical risk minimizer) for the class of interest. Our work builds on recent work of Goldwasser, Kalais, and Montasser (2020), who gave error and abstention guarantees for transductive binary classification.
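As a rough illustration of the fixed-cost abstention model described in the abstract (a minimal sketch, not the paper's algorithm), the Python snippet below computes the average loss of an abstaining predictor: each abstention costs a fixed c, and each non-abstained prediction incurs 0/1 loss. The helper name abstention_loss, the toy data, and the oracle abstention set are all illustrative assumptions.

import numpy as np

def abstention_loss(y_true, y_pred, abstain, c):
    # Each abstained point costs a fixed c; each predicted point
    # incurs 0/1 loss. (Hypothetical helper, for illustration only.)
    per_example = np.where(abstain, c, (y_pred != y_true).astype(float))
    return float(per_example.mean())

# Toy data: a predictor that errs on roughly 30% of test points.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
flip = rng.random(1000) < 0.3
y_pred = np.where(flip, 1 - y_true, y_true)

never_abstain = np.zeros(1000, dtype=bool)
oracle_abstain = y_pred != y_true  # abstain exactly on the errors

print(abstention_loss(y_true, y_pred, never_abstain, c=0.1))   # ~0.30
print(abstention_loss(y_true, y_pred, oracle_abstain, c=0.1))  # ~0.03

This mirrors the shape of the paper's loss bound: when abstentions are concentrated on the OOD or adversarial points, the loss paid is roughly the in-distribution generalization error plus c times the fraction of such points.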
Author Information
Adam Kalai (Microsoft Research New England)
Varun Kanade (University of Oxford)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Towards optimally abstaining from prediction with OOD test examples
More from the Same Authors
- 2021: Programming Puzzles
  Tal Schuster · Ashwin Kalyan · Alex Polozov · Adam Kalai
- 2022: When are Local Queries Useful for Robust Learning?
  Pascale Gourdeau · Varun Kanade · Marta Kwiatkowska · James Worrell
- 2022: Language Models Can Teach Themselves to Program Better
  Patrick Haluptzok · Matthew Bowers · Adam Kalai
- 2022 Spotlight: Are GANs overkill for NLP?
  David Alvarez-Melis · Vikas Garg · Adam Kalai
- 2022: A Theory of Unsupervised Translation for Understanding Animal Communication
  Shafi Goldwasser · David Gruber · Adam Kalai · Orr Paradise
- 2022 Poster: When are Local Queries Useful for Robust Learning?
  Pascale Gourdeau · Varun Kanade · Marta Kwiatkowska · James Worrell
- 2022 Poster: Are GANs overkill for NLP?
  David Alvarez-Melis · Vikas Garg · Adam Kalai
- 2022 Poster: Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms
  Surbhi Goel · Sham Kakade · Adam Kalai · Cyril Zhang
- 2020 Poster: The Statistical Complexity of Early-Stopped Mirror Descent
  Tomas Vaskevicius · Varun Kanade · Patrick Rebeschini
- 2020 Spotlight: The Statistical Complexity of Early-Stopped Mirror Descent
  Tomas Vaskevicius · Varun Kanade · Patrick Rebeschini
- 2018 Poster: Clustering Redemption–Beyond the Impossibility of Kleinberg’s Axioms
  Vincent Cohen-Addad · Varun Kanade · Frederik Mallmann-Trenn
- 2018 Poster: Supervising Unsupervised Learning
  Vikas Garg · Adam Kalai
- 2018 Spotlight: Supervising Unsupervised Learning
  Vikas Garg · Adam Kalai
- 2017 Poster: Hierarchical Clustering Beyond the Worst-Case
  Vincent Cohen-Addad · Varun Kanade · Frederik Mallmann-Trenn
- 2017 Poster: From which world is your graph
  Cheng Li · Felix MF Wong · Zhenming Liu · Varun Kanade
- 2011 Poster: Efficient Learning of Generalized Linear and Single Index Models with Isotonic Regression
  Sham M Kakade · Adam Kalai · Varun Kanade · Ohad Shamir
- 2009 Poster: Potential-Based Agnostic Boosting
  Adam Kalai · Varun Kanade