Poster
Invariant and Transportable Representations for Anti-Causal Domain Shifts
Yibo Jiang · Victor Veitch
Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data was gathered. Methods to handle such problems must specify what structure is held in common between the domains and what is allowed to vary. A natural assumption is that causal (structural) relationships are invariant in all domains. Then, it is tempting to learn a predictor for label $Y$ that depends only on its causal parents. However, many real-world problems are "anti-causal" in the sense that $Y$ is a cause of the covariates $X$; in this case, $Y$ has no causal parents and the naive causal invariance is useless. In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and naturally handles the "anti-causal" structure. We show how to leverage the shared causal structure of the domains to learn a representation that both admits an invariant predictor and allows fast adaptation in new domains. The key is to translate causal assumptions into learning principles that disentangle "invariant" and "non-stable" features. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm.
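To make the anti-causal setting concrete, the sketch below simulates a toy version of it: the label $Y$ is drawn first and then generates two covariates, one through a mechanism shared across domains and one through a domain-dependent mechanism. A predictor restricted to the shared ("invariant") feature keeps its accuracy when the domain-dependent correlation flips at test time, while a predictor free to use the non-stable feature degrades. This is an illustrative simulation only, not the authors' algorithm or code; the function `sample_domain`, the `spurious_strength` parameter, and the use of scikit-learn logistic regression are assumptions made for the example.

```python
# Hypothetical sketch (not the paper's code) of an anti-causal data-generating
# process: Y causes X, one feature has an invariant mechanism P(x_inv | Y),
# and one feature has a domain-dependent ("non-stable") mechanism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_domain(n, spurious_strength):
    """Anti-causal sampling: draw the label Y first, then generate X from Y."""
    y = rng.integers(0, 2, size=n)                             # binary label
    x_inv = y + 0.5 * rng.normal(size=n)                       # mechanism shared by all domains
    x_spu = spurious_strength * y + 0.5 * rng.normal(size=n)   # mechanism that varies with the domain
    return np.column_stack([x_inv, x_spu]), y

# Two training domains with different spurious strengths; the test domain flips the sign.
X_a, y_a = sample_domain(5000, 2.0)
X_b, y_b = sample_domain(5000, 1.0)
X_te, y_te = sample_domain(5000, -2.0)

X_tr, y_tr = np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])

pooled = LogisticRegression().fit(X_tr, y_tr)            # free to exploit the non-stable feature
invariant = LogisticRegression().fit(X_tr[:, :1], y_tr)  # restricted to the invariant feature

print("pooled predictor on shifted domain   :", pooled.score(X_te, y_te))
print("invariant-feature predictor (shifted):", invariant.score(X_te, y_te))
```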
Author Information
Yibo Jiang (University of Chicago)
Victor Veitch (University of Chicago, Google)
More from the Same Authors
- 2021 Spotlight: Counterfactual Invariance to Spurious Correlations in Text Classification »
  Victor Veitch · Alexander D'Amour · Steve Yadlowsky · Jacob Eisenstein
- 2021: Using Embeddings to Estimate Peer Influence on Social Networks »
  Irina Cristali · Victor Veitch
- 2021: Mitigating Overlap Violations in Causal Inference with Text Data »
  Lin Gui · Victor Veitch
- 2022: Causal Estimation for Text Data with (Apparent) Overlap Violations »
  Lin Gui · Victor Veitch
- 2022 Poster: Using Embeddings for Causal Estimation of Peer Influence in Social Networks »
  Irina Cristali · Victor Veitch
- 2021 Poster: Counterfactual Invariance to Spurious Correlations in Text Classification »
  Victor Veitch · Alexander D'Amour · Steve Yadlowsky · Jacob Eisenstein
- 2020 Poster: Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding »
  Victor Veitch · Anisha Zaveri
- 2020 Spotlight: Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding »
  Victor Veitch · Anisha Zaveri
- 2019: Coffee break, posters, and 1-on-1 discussions »
  Julius von Kügelgen · David Rohde · Candice Schumann · Grace Charles · Victor Veitch · Vira Semenova · Mert Demirer · Vasilis Syrgkanis · Suraj Nair · Aahlad Puli · Masatoshi Uehara · Aditya Gopalan · Yi Ding · Ignavier Ng · Khashayar Khosravi · Eli Sherman · Shuxi Zeng · Aleksander Wieczorek · Hao Liu · Kyra Gan · Jason Hartford · Miruna Oprescu · Alexander D'Amour · Jörn Boehnke · Yuta Saito · Théophile Griveau-Billion · Chirag Modi · Shyngys Karimov · Jeroen Berrevoets · Logan Graham · Imke Mayer · Dhanya Sridhar · Issa Dahabreh · Alan Mishler · Duncan Wadsworth · Khizar Qureshi · Rahul Ladhania · Gota Morishita · Paul Welle
- 2019 Poster: Using Embeddings to Correct for Unobserved Confounding in Networks »
  Victor Veitch · Yixin Wang · David Blei
- 2019 Poster: Adapting Neural Networks for the Estimation of Treatment Effects »
  Claudia Shi · David Blei · Victor Veitch
- 2015: The general class of (sparse) random graphs arising from exchangeable point processes »
  Victor Veitch