Poster
Domain Adaptation under Open Set Label Shift
Saurabh Garg · Sivaraman Balakrishnan · Zachary Lipton
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS), where the label distribution can change arbitrarily and a new class may arrive during deployment, but the class-conditional distributions $p(x|y)$ are domain-invariant. OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning. The learner's goals are two-fold: (a) estimate the target label distribution, including the novel class; and (b) learn a target classifier. First, we establish necessary and sufficient conditions for identifying these quantities. Second, motivated by advances in label shift and PU learning, we propose practical methods for both tasks that leverage black-box predictors. Unlike typical Open Set Domain Adaptation (OSDA) problems, which tend to be ill-posed and amenable only to heuristics, OSLS offers a well-posed problem amenable to more principled machinery. Experiments across numerous semi-synthetic benchmarks on vision, language, and medical datasets demonstrate that our methods consistently outperform OSDA baselines, achieving $10$--$25\%$ improvements in target domain accuracy. Finally, we analyze the proposed methods, establishing finite-sample convergence to the true label marginal and convergence to the optimal classifier for linear models in a Gaussian setup. Code is available at https://github.com/acmi-lab/Open-Set-Label-Shift.
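To make the black-box estimation idea concrete, below is a minimal sketch (not the paper's exact OSLS procedure) of black-box label-shift estimation for the previously seen classes, in the style of confusion-matrix methods that the proposed estimators build on; handling the novel class additionally requires a PU-style mixture proportion estimate, which is omitted here. The classifier `clf`, the function name `estimate_target_label_marginal`, and the data arrays are illustrative assumptions rather than names from the released code.

```python
# Sketch only: black-box label-shift estimation for the known classes.
# Assumes `clf` is a trained source classifier with a scikit-learn-like
# `predict` method returning integer class labels in {0, ..., num_classes-1}.
import numpy as np

def estimate_target_label_marginal(clf, X_src, y_src, X_tgt, num_classes):
    """Estimate p_t(y) by solving C @ p_t = mu, where
    C[i, j] = p_s(predicted = i, true = j) on held-out source data and
    mu[i] = fraction of target points predicted as class i."""
    preds_src = clf.predict(X_src)
    C = np.zeros((num_classes, num_classes))
    for pred, true in zip(preds_src, y_src):
        C[pred, true] += 1.0
    C /= len(y_src)

    preds_tgt = clf.predict(X_tgt)
    mu = np.bincount(preds_tgt, minlength=num_classes) / len(preds_tgt)

    # Least-squares solve; clip and renormalize to keep a valid distribution.
    p_t, *_ = np.linalg.lstsq(C, mu, rcond=None)
    p_t = np.clip(p_t, 0.0, None)
    return p_t / p_t.sum()
```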
Author Information
Saurabh Garg (Carnegie Mellon University)
Sivaraman Balakrishnan (Carnegie Mellon University)
Zachary Lipton (Carnegie Mellon University)
More from the Same Authors
- 2021 Spotlight: Mixture Proportion Estimation and PU Learning: A Modern Approach »
  Saurabh Garg · Yifan Wu · Alexander Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2021 : Model-Free Learning for Continuous Timing as an Action »
  Helen Zhou · David Childers · Zachary Lipton
- 2021 : Leveraging Unlabeled Data to Predict Out-of-Distribution Performance »
  Saurabh Garg · Sivaraman Balakrishnan · Zachary Lipton · Behnam Neyshabur · Hanie Sedghi
- 2022 : Downstream Datasets Make Surprisingly Good Pretraining Corpora »
  Kundan Krishna · Saurabh Garg · Jeffrey Bigham · Zachary Lipton
- 2022 : Disentangling the Mechanisms Behind Implicit Regularization in SGD »
  Zachary Novack · Simran Kaur · Tanya Marwah · Saurabh Garg · Zachary Lipton
- 2022 : Deconstructing Distributions: A Pointwise Framework of Learning »
  Gal Kaplun · Nikhil Ghosh · Saurabh Garg · Boaz Barak · Preetum Nakkiran
- 2022 : RLSBench: A Large-Scale Empirical Study of Domain Adaptation Under Relaxed Label Shift »
  Saurabh Garg · Nick Erickson · James Sharpnack · Alexander Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2022 : Local Causal Discovery for Estimating Causal Effects »
  Shantanu Gupta · David Childers · Zachary Lipton
- 2022 : On the Maximum Hessian Eigenvalue and Generalization »
  Simran Kaur · Jeremy M Cohen · Zachary Lipton
- 2022 : Panel on Technical Challenges Associated with Reliable Human Evaluations of Generative Models »
  Long Ouyang · Tongshuang Wu · Zachary Lipton
- 2022 Workshop: Human Evaluation of Generative Models »
  Divyansh Kaushik · Jennifer Hsia · Jessica Huynh · Yonadav Shavit · Samuel Bowman · Ting-Hao Huang · Douwe Kiela · Zachary Lipton · Eric Michael Smith
- 2022 Poster: Characterizing Datapoints via Second-Split Forgetting »
  Pratyush Maini · Saurabh Garg · Zachary Lipton · J. Zico Kolter
- 2022 Poster: Unsupervised Learning under Latent Label Shift »
  Manley Roberts · Pranav Mani · Saurabh Garg · Zachary Lipton
- 2021 Poster: Mixture Proportion Estimation and PU Learning: A Modern Approach »
  Saurabh Garg · Yifan Wu · Alexander Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2020 : Contributed Talk 1: Fairness Under Partial Compliance »
  Jessica Dai · Zachary Lipton
- 2020 : Q & A and Panel Session with Tom Mitchell, Jenn Wortman Vaughan, Sanjoy Dasgupta, and Finale Doshi-Velez »
  Tom Mitchell · Jennifer Wortman Vaughan · Sanjoy Dasgupta · Finale Doshi-Velez · Zachary Lipton
- 2020 Workshop: HAMLETS: Human And Model in the Loop Evaluation and Training Strategies »
  Divyansh Kaushik · Bhargavi Paranjape · Forough Arabshahi · Yanai Elazar · Yixin Nie · Max Bartolo · Polina Kirichenko · Pontus Lars Erik Saito Stenetorp · Mohit Bansal · Zachary Lipton · Douwe Kiela
- 2020 Poster: A Unified View of Label Shift Estimation »
  Saurabh Garg · Yifan Wu · Sivaraman Balakrishnan · Zachary Lipton
- 2020 Poster: On Learning Ising Models under Huber's Contamination Model »
  Adarsh Prasad · Vishwak Srinivasan · Sivaraman Balakrishnan · Pradeep Ravikumar
- 2019 Poster: Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift »
  Stephan Rabanser · Stephan Günnemann · Zachary Lipton
- 2019 Poster: Learning Robust Global Representations by Penalizing Local Predictive Power »
  Haohan Wang · Songwei Ge · Zachary Lipton · Eric Xing
- 2019 Poster: Game Design for Eliciting Distinguishable Behavior »
  Fan Yang · Liu Leqi · Yifan Wu · Zachary Lipton · Pradeep Ravikumar · Tom M Mitchell · William Cohen
- 2018 : Invited Talk 1 »
  Zachary Lipton
- 2018 : Panel on research process »
  Zachary Lipton · Charles Sutton · Finale Doshi-Velez · Hanna Wallach · Suchi Saria · Rich Caruana · Thomas Rainforth
- 2018 : Zachary Lipton »
  Zachary Lipton
- 2018 Poster: How Many Samples are Needed to Estimate a Convolutional Neural Network? »
  Simon Du · Yining Wang · Xiyu Zhai · Sivaraman Balakrishnan · Russ Salakhutdinov · Aarti Singh
- 2018 Poster: Optimization of Smooth Functions with Noisy Observations: Local Minimax Rates »
  Yining Wang · Sivaraman Balakrishnan · Aarti Singh
- 2018 Poster: Does mitigating ML's impact disparity require treatment disparity? »
  Zachary Lipton · Julian McAuley · Alexandra Chouldechova