Out-of-domain (OOD) generalization is a significant challenge for machine learning models. Many techniques have been proposed to overcome this challenge, often focused on learning models with certain invariance properties. In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization. Specifically, we show that under certain conditions, models which achieve \emph{multi-domain calibration} are provably free of spurious correlations. This leads us to propose multi-domain calibration as a measurable and trainable surrogate for the OOD performance of a classifier. We therefore introduce methods that are easy to apply and allow practitioners to improve multi-domain calibration by training or modifying an existing model, leading to better performance on unseen domains. Using four datasets from the recently proposed WILDS OOD benchmark, as well as the Colored MNIST dataset, we demonstrate that training or tuning models so they are calibrated across multiple domains leads to significantly improved performance on unseen test domains. We believe this intriguing connection between calibration and OOD generalization is promising from both a practical and theoretical point of view.
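The abstract proposes multi-domain calibration as a measurable surrogate for OOD performance. A minimal sketch of how such a quantity could be measured is below; the function names and the choice of worst-case aggregation across domains are illustrative assumptions, not the paper's exact training objective.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def multi_domain_calibration_error(conf_by_domain, correct_by_domain):
    """Worst-case ECE over domains (a hypothetical aggregation): a model
    that is calibrated on every training domain drives this toward zero."""
    return max(
        expected_calibration_error(c, y)
        for c, y in zip(conf_by_domain, correct_by_domain)
    )
```

A model that is well calibrated on one domain but overconfident on another would score poorly here even if its average calibration looks good, which is the kind of per-domain signal the abstract argues correlates with OOD generalization.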
Author Information
Yoav Wald (Johns Hopkins University)
Amir Feder (Technion - Israel Institute of Technology)
Amir Feder is a Postdoctoral Research Scientist at Columbia University's Data Science Institute, working with Professor David Blei on causal inference and natural language processing. His research seeks to develop methods that integrate causality into natural language processing, and to use them to build linguistically informed algorithms for predicting and understanding human behavior. Through the paradigm of causal machine learning, Amir aims to build bridges between machine learning and the social sciences. Before joining Columbia, Amir received his PhD from the Technion, where he was advised by Roi Reichart and worked closely with Uri Shalit. In a previous (academic) life, Amir studied economics, statistics, and history at Tel Aviv University, the Hebrew University of Jerusalem, and Northwestern University. Amir was the organizer of the First Workshop on Causal Inference and NLP (CI+NLP) at EMNLP 2021.
Daniel Greenfeld (Weizmann Institute)
Uri Shalit (Technion)
More from the Same Authors
- 2021 : Covariate Shift of Latent Confounders in Imitation and Reinforcement Learning
  Guy Tennenholtz · Assaf Hallak · Gal Dalal · Shie Mannor · Gal Chechik · Uri Shalit
- 2022 : An Invariant Learning Characterization of Controlled Text Generation
  Claudia Shi · Carolina Zheng · Keyon Vafa · Amir Feder · David Blei
- 2022 : Malign Overfitting: Interpolation and Invariance are Fundamentally at Odds
  Yoav Wald · Gal Yona · Uri Shalit · Yair Carmon
- 2022 : Useful Confidence Measures: Beyond the Max Score
  Gal Yona · Amir Feder · Itay Laish
- 2023 Poster: Why models take shortcuts when roads are perfect: Understanding and mitigating shortcut learning in tasks with perfect stable features
  Aahlad Manas Puli · Lily Zhang · Yoav Wald · Rajesh Ranganath
- 2023 Poster: Causal-structure Driven Augmentations for Text OOD Generalization
  Amir Feder · Yoav Wald · Claudia Shi · Suchi Saria · David Blei
- 2023 Poster: Evaluating the Moral Beliefs Encoded in LLMs
  Nino Scherrer · Claudia Shi · Amir Feder · David Blei
- 2022 Poster: CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
  Eldar D Abraham · Karel D'Oosterlinck · Amir Feder · Yair Gat · Atticus Geiger · Christopher Potts · Roi Reichart · Zhengxuan Wu
- 2022 Poster: In the Eye of the Beholder: Robust Prediction with Causal User Modeling
  Amir Feder · Guy Horowitz · Yoav Wald · Roi Reichart · Nir Rosenfeld
- 2021 Poster: Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data
  Andrew Jesson · Panagiotis Tigas · Joost van Amersfoort · Andreas Kirsch · Uri Shalit · Yarin Gal
- 2019 Poster: Globally Optimal Learning for Structured Elliptical Losses
  Yoav Wald · Nofar Noy · Gal Elidan · Ami Wiesel
- 2017 Workshop: Machine Learning for Health (ML4H) - What Parts of Healthcare are Ripe for Disruption by Machine Learning Right Now?
  Jason Fries · Alex Wiltschko · Andrew Beam · Isaac S Kohane · Jasper Snoek · Peter Schulam · Madalina Fiterau · David Kale · Rajesh Ranganath · Bruno Jedynak · Michael Hughes · Tristan Naumann · Natalia Antropova · Adrian Dalca · SHUBHI ASTHANA · Prateek Tandon · Jaz Kandola · Uri Shalit · Marzyeh Ghassemi · Tim Althoff · Alexander Ratner · Jumana Dakka
- 2017 Poster: Causal Effect Inference with Deep Latent-Variable Models
  Christos Louizos · Uri Shalit · Joris Mooij · David Sontag · Richard Zemel · Max Welling
- 2017 Poster: Robust Conditional Probabilities
  Yoav Wald · Amir Globerson
- 2016 Workshop: Machine Learning for Health
  Uri Shalit · Marzyeh Ghassemi · Jason Fries · Rajesh Ranganath · Theofanis Karaletsos · David Kale · Peter Schulam · Madalina Fiterau
- 2010 Spotlight: Online Learning in The Manifold of Low-Rank Matrices
  Uri Shalit · Daphna Weinshall · Gal Chechik
- 2010 Poster: Online Learning in The Manifold of Low-Rank Matrices
  Uri Shalit · Daphna Weinshall · Gal Chechik
- 2009 Poster: An Online Algorithm for Large Scale Image Similarity Learning
  Gal Chechik · Uri Shalit · Varun Sharma · Samy Bengio