Spotlight
Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
Kartik Ahuja · Ethan Caballero · Dinghuai Zhang · Jean-Christophe Gagnon-Audet · Yoshua Bengio · Ioannis Mitliagkas · Irina Rish


The invariance principle from causality is at the heart of notable approaches such as invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) generalization failures. Despite the promising theory, invariance principle-based approaches fail in common classification tasks, where invariant (causal) features capture all the information about the label. Are these failures due to the methods failing to capture the invariance? Or is the invariance principle itself insufficient? To answer these questions, we revisit the fundamental assumptions in linear regression tasks, where invariance-based approaches were shown to provably generalize OOD. In contrast to the linear regression tasks, we show that for linear classification tasks we need much stronger restrictions on the distribution shifts, or otherwise OOD generalization is impossible. Furthermore, even with appropriate restrictions on distribution shifts in place, we show that the invariance principle alone is insufficient. We prove that a form of the information bottleneck constraint along with invariance helps address the key failures when invariant features capture all the information about the label and also retains the existing success when they do not. We propose an approach that incorporates both of these principles and demonstrate its effectiveness in several experiments.
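To make the proposed combination concrete, below is a minimal PyTorch-style sketch, not the authors' reference implementation: it pairs the IRMv1-style gradient penalty (an invariance surrogate) with a batch-variance proxy for the information bottleneck term. The function names (irm_penalty, ib_irm_loss) and the weights lam and gamma are illustrative assumptions, as is the choice of representation variance as the bottleneck surrogate.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1-style invariance surrogate: squared gradient of the
    # environment risk w.r.t. a dummy classifier scale w = 1.0.
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def ib_irm_loss(featurizer, classifier, envs, lam=1.0, gamma=1.0):
    # envs: list of (x, y) batches, one per training environment.
    risk, inv_pen, ib_pen = 0.0, 0.0, 0.0
    for x, y in envs:
        z = featurizer(x)                  # representation Phi(x)
        logits = classifier(z)
        risk = risk + F.cross_entropy(logits, y)
        inv_pen = inv_pen + irm_penalty(logits, y)
        # Bottleneck surrogate: penalize the variance of the learned
        # representation across the batch (a differentiable proxy for
        # limiting the information carried by Phi(x)).
        ib_pen = ib_pen + z.var(dim=0).mean()
    n = len(envs)
    return risk / n + lam * inv_pen / n + gamma * ib_pen / n
```

In this sketch, setting gamma = 0 recovers a plain IRMv1-style objective, while gamma > 0 additionally compresses the representation; the relative weighting of the two penalties would need to be tuned per task.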

Author Information

Kartik Ahuja (Mila)
Ethan Caballero (Mila)

https://www.google.com/#q=ethan+caballero

Dinghuai Zhang (Mila / Peking University)
Jean-Christophe Gagnon-Audet (Montreal Institute for Learning Algorithms, University of Montreal)
Yoshua Bengio (Mila / U. Montreal)

Yoshua Bengio (PhD 1991 in Computer Science, McGill University). After two post-doctoral years, one at MIT with Michael Jordan and one at AT&T Bell Laboratories with Yann LeCun, he became a professor in the Department of Computer Science and Operations Research at Université de Montréal. Author of two books (a third is in preparation) and more than 200 publications, he is among the most cited Canadian computer scientists and is or has been an associate editor of the top journals in machine learning and neural networks. Since 2000 he has held a Canada Research Chair in Statistical Learning Algorithms, since 2006 an NSERC Chair, and since 2005 he has been a Senior Fellow of the Canadian Institute for Advanced Research; since 2014 he has co-directed its program focused on deep learning. He is on the board of the NIPS foundation and has been program chair and general chair for NIPS. He co-organized the Learning Workshop for 14 years and co-created the International Conference on Learning Representations. His interests center on a quest for AI through machine learning, including fundamental questions on deep learning, representation learning, the geometry of generalization in high-dimensional spaces, manifold learning, and biologically inspired learning algorithms.

Ioannis Mitliagkas (University of Montreal)
Irina Rish (MILA / Université de Montréal)
