We give a method for proactively identifying small, plausible shifts in distribution which lead to large differences in model performance. These shifts are defined via parametric changes in the causal mechanisms of observed variables, where constraints on parameters yield a "robustness set" of plausible distributions and a corresponding worst-case loss over the set. While the loss under an individual parametric shift can be estimated via reweighting techniques such as importance sampling, the resulting worst-case optimization problem is non-convex, and the estimate may suffer from large variance. For small shifts, however, we can construct a local second-order approximation to the loss under shift and cast the problem of finding a worst-case shift as a particular non-convex quadratic optimization problem, for which efficient algorithms are available. We demonstrate that this second-order approximation can be estimated directly for shifts in conditional exponential family models, and we bound the approximation error. We apply our approach to a computer vision task (classifying gender from images), revealing sensitivity to shifts in non-causal attributes.
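To make the approach concrete, here is a minimal, hypothetical sketch (not the authors' implementation) on a toy problem: the covariate X is standard normal under the training distribution, and a parametric shift moves its mean to delta. The loss under the shifted distribution is estimated by importance sampling with the density ratio as weights, a local second-order Taylor approximation in delta is built by finite differences, and the worst case over the constraint |delta| <= lam is read off the resulting one-dimensional quadratic. All names (`shifted_loss`, `quad_approx`, the squared-error loss) are illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: X ~ N(0, 1) under the training distribution; the shift moves
# the mean of X's causal mechanism to delta.
n = 200_000
x = rng.normal(0.0, 1.0, n)
loss = (x - 1.0) ** 2  # per-sample loss of some fixed model (toy choice)

def shifted_loss(delta):
    """Importance-sampling estimate of E[loss] when X ~ N(delta, 1)."""
    # Density ratio N(delta, 1) / N(0, 1) evaluated at the training samples.
    w = np.exp(delta * x - 0.5 * delta ** 2)
    return np.mean(w * loss)

# Local second-order (Taylor) approximation around delta = 0, with the
# gradient and Hessian estimated by finite differences of the IS estimate.
eps = 1e-3
l0 = shifted_loss(0.0)
g = (shifted_loss(eps) - shifted_loss(-eps)) / (2 * eps)
h = (shifted_loss(eps) - 2 * l0 + shifted_loss(-eps)) / eps ** 2

def quad_approx(delta):
    return l0 + g * delta + 0.5 * h * delta ** 2

# Worst-case shift within |delta| <= lam. In one dimension the maximizer of
# a quadratic over an interval is a boundary point or (if concave) the
# interior stationary point; in higher dimensions this becomes the
# trust-region-style non-convex quadratic program mentioned in the abstract.
lam = 0.5
candidates = [-lam, lam]
if h < 0 and abs(-g / h) <= lam:
    candidates.append(-g / h)
worst = max(candidates, key=quad_approx)
print(worst, quad_approx(worst))
```

On this toy example the true shifted loss is (delta - 1)^2 + 1, so the quadratic approximation is essentially exact and the worst case sits at the boundary delta = -lam; in general the approximation only holds locally, which is why the paper restricts attention to small shifts and bounds the approximation error.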
Author Information
Nikolaj Thams (University of Copenhagen)
Michael Oberst (MIT)
David Sontag (MIT)
More from the Same Authors
- 2022: PEST: Combining Parameter-Efficient Fine-Tuning with Self-Training and Co-Training
  Hunter Lang · Monica Agrawal · Yoon Kim · David Sontag
- 2022 Poster: Falsification before Extrapolation in Causal Effect Estimation
  Zeshan M Hussain · Michael Oberst · Ming-Chieh Shih · David Sontag
- 2022 Poster: Training Subset Selection for Weak Supervision
  Hunter Lang · Aravindan Vijayaraghavan · David Sontag
- 2022 Poster: ETAB: A Benchmark Suite for Visual Representation Learning in Echocardiography
  Ahmed M. Alaa · Anthony Philippakis · David Sontag
- 2021 Poster: Finding Regions of Heterogeneity in Decision-Making via Expected Conditional Covariance
  Justin Lim · Christina Ji · Michael Oberst · Saul Blecker · Leora Horwitz · David Sontag
- 2018: TBC 13
  David Sontag
- 2018 Poster: Why Is My Classifier Discriminatory?
  Irene Chen · Fredrik Johansson · David Sontag
- 2018 Spotlight: Why Is My Classifier Discriminatory?
  Irene Chen · Fredrik Johansson · David Sontag
- 2017: Invited Talk 4
  David Sontag
- 2017 Poster: Causal Effect Inference with Deep Latent-Variable Models
  Christos Louizos · Uri Shalit · Joris Mooij · David Sontag · Richard Zemel · Max Welling
- 2015 Workshop: Machine Learning For Healthcare (MLHC)
  Theofanis Karaletsos · Rajesh Ranganath · Suchi Saria · David Sontag
- 2015 Poster: Barrier Frank-Wolfe for Marginal Inference
  Rahul G Krishnan · Simon Lacoste-Julien · David Sontag
- 2013 Poster: Discovering Hidden Variables in Noisy-Or Networks using Quartet Tests
  Yacine Jernite · Yoni Halpern · David Sontag
- 2011 Poster: Complexity of Inference in Latent Dirichlet Allocation
  David Sontag · Daniel Roy
- 2011 Spotlight: Complexity of Inference in Latent Dirichlet Allocation
  David Sontag · Daniel Roy
- 2010 Spotlight: More data means less inference: A pseudo-max approach to structured learning
  David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson
- 2010 Poster: More data means less inference: A pseudo-max approach to structured learning
  David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson
- 2009 Workshop: Approximate Learning of Large Scale Graphical Models
  Russ Salakhutdinov · Amir Globerson · David Sontag
- 2008 Workshop: Approximate inference - how far have we come?
  Amir Globerson · David Sontag · Tommi Jaakkola
- 2008 Poster: Clusters and Coarse Partitions in LP Relaxations
  David Sontag · Amir Globerson · Tommi Jaakkola
- 2008 Spotlight: Clusters and Coarse Partitions in LP Relaxations
  David Sontag · Amir Globerson · Tommi Jaakkola
- 2007 Oral: New Outer Bounds on the Marginal Polytope
  David Sontag · Tommi Jaakkola
- 2007 Poster: New Outer Bounds on the Marginal Polytope
  David Sontag · Tommi Jaakkola