Consider the problem of estimating the causal effect of some attribute of a text document; for example: what effect does writing a polite vs. rude email have on response time? To estimate a causal effect from observational data, we need to adjust for confounding aspects of the text that affect both the treatment and outcome---e.g., the topic or writing level of the text. These confounding aspects are unknown a priori, so it seems natural to adjust for the entirety of the text (e.g., using a transformer). However, causal identification and estimation procedures rely on the assumption of overlap: for all levels of the adjustment variables, there is randomness left over so that every unit could have (not) received treatment. Since the treatment here is itself an attribute of the text, it is perfectly determined, and overlap is apparently violated. The purpose of this paper is to show how to handle causal identification and obtain robust causal estimation in the presence of apparent overlap violations. In brief, the idea is to use supervised representation learning to produce a data representation that preserves confounding information while eliminating information that is only predictive of the treatment. This representation then suffices for adjustment and satisfies overlap. Adapting results on non-parametric estimation, we find that this procedure is robust to conditional outcome misestimation, yielding a low-bias estimator with valid uncertainty quantification under weak conditions. Empirical results show strong improvements in bias and uncertainty quantification relative to the natural baseline.
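To make the pipeline in the abstract concrete, here is a minimal sketch, not the paper's implementation. It assumes the text has already been embedded as a numeric feature matrix, stands in for the learned confounding-preserving representation with the outcome models' predictions (a crude proxy), and plugs everything into the standard doubly robust (AIPW) estimator. The synthetic data, model choices, and variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch, under the assumptions stated above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: X ~ text embeddings, T ~ binary treatment
# (e.g. polite vs. rude), Y ~ outcome (e.g. response time).
n, d = 2000, 20
X = rng.normal(size=(n, d))
confounder = X[:, 0]  # an aspect of the text affecting both T and Y
T = rng.binomial(1, 1 / (1 + np.exp(-confounder)))
Y = 1.0 * T + 2.0 * confounder + rng.normal(size=n)

# Step 1: supervised outcome models Q(t, x) = E[Y | T = t, X = x].
q_models = {t: GradientBoostingRegressor().fit(X[T == t], Y[T == t]) for t in (0, 1)}
Q0, Q1 = q_models[0].predict(X), q_models[1].predict(X)

# Step 2: a low-dimensional representation intended to keep confounding
# (outcome-relevant) information while discarding treatment-only information.
# Here we simply reuse (Q0, Q1) as a stand-in for such a representation.
Z = np.column_stack([Q0, Q1])

# Step 3: propensity scores estimated from the representation rather than
# the full text, so that overlap is plausible.
g = LogisticRegression().fit(Z, T).predict_proba(Z)[:, 1]
g = np.clip(g, 0.01, 0.99)

# Step 4: doubly robust (AIPW) estimate of the average treatment effect,
# with a standard-error-based confidence interval.
psi = Q1 - Q0 + T * (Y - Q1) / g - (1 - T) * (Y - Q0) / (1 - g)
ate = psi.mean()
se = psi.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate: {ate:.3f} +/- {1.96 * se:.3f}")
```

The sketch only illustrates the role of each piece: estimating propensities from a reduced representation (rather than the full text) is what restores overlap, and the doubly robust form is what gives the robustness to conditional outcome misestimation described in the abstract.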
Author Information
Lin Gui (The University of Chicago)
Victor Veitch (University of Chicago, Google)
More from the Same Authors
- 2021 Spotlight: Counterfactual Invariance to Spurious Correlations in Text Classification » Victor Veitch · Alexander D'Amour · Steve Yadlowsky · Jacob Eisenstein
- 2021: Using Embeddings to Estimate Peer Influence on Social Networks » Irina Cristali · Victor Veitch
- 2021: Mitigating Overlap Violations in Causal Inference with Text Data » Lin Gui · Victor Veitch
- 2022 Poster: Using Embeddings for Causal Estimation of Peer Influence in Social Networks » Irina Cristali · Victor Veitch
- 2022 Poster: Invariant and Transportable Representations for Anti-Causal Domain Shifts » Yibo Jiang · Victor Veitch
- 2021 Poster: Counterfactual Invariance to Spurious Correlations in Text Classification » Victor Veitch · Alexander D'Amour · Steve Yadlowsky · Jacob Eisenstein
- 2020 Poster: Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding » Victor Veitch · Anisha Zaveri
- 2020 Spotlight: Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding » Victor Veitch · Anisha Zaveri
- 2019: Coffee break, posters, and 1-on-1 discussions » Julius von Kügelgen · David Rohde · Candice Schumann · Grace Charles · Victor Veitch · Vira Semenova · Mert Demirer · Vasilis Syrgkanis · Suraj Nair · Aahlad Puli · Masatoshi Uehara · Aditya Gopalan · Yi Ding · Ignavier Ng · Khashayar Khosravi · Eli Sherman · Shuxi Zeng · Aleksander Wieczorek · Hao Liu · Kyra Gan · Jason Hartford · Miruna Oprescu · Alexander D'Amour · Jörn Boehnke · Yuta Saito · Théophile Griveau-Billion · Chirag Modi · Shyngys Karimov · Jeroen Berrevoets · Logan Graham · Imke Mayer · Dhanya Sridhar · Issa Dahabreh · Alan Mishler · Duncan Wadsworth · Khizar Qureshi · Rahul Ladhania · Gota Morishita · Paul Welle
- 2019 Poster: Using Embeddings to Correct for Unobserved Confounding in Networks » Victor Veitch · Yixin Wang · David Blei
- 2019 Poster: Adapting Neural Networks for the Estimation of Treatment Effects » Claudia Shi · David Blei · Victor Veitch
- 2015: The general class of (sparse) random graphs arising from exchangeable point processes » Victor Veitch