The goal of domain generalization is to train models that generalize well to unseen domains. The typical strategy is two-stage: first pre-train the network on a large corpus, then fine-tune it on the task's training domains. If the pre-training dataset is large enough, this strategy is effective because the corpus is likely to contain samples related to the unseen domains. Yet large-scale pre-training is costly and feasible only for a few large companies. Rather than trying to cover all possible test distributions during pre-training, we propose to add a third stage: editing the featurizer after fine-tuning. To this end, we interpolate the featurizer with auxiliary featurizers trained on auxiliary datasets. This merging via weight averaging edits the main featurizer by incorporating the feature mechanisms learned on the auxiliary datasets. Empirically, we show that this editing strategy improves the performance of existing state-of-the-art models on the DomainBed benchmark by adapting the featurizer to the test domain. We hope to encourage updatable approaches beyond the direct transfer learning strategy.
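The interpolation described in the abstract can be sketched as a weighted average of network parameters. A minimal illustration, assuming parameters are represented as plain name-to-value dictionaries; the function name `merge_featurizers` and the interpolation coefficients are hypothetical, not the paper's actual implementation:

```python
def merge_featurizers(main, auxiliaries, weights):
    """Linearly interpolate parameter dictionaries:
    merged = (1 - sum(weights)) * main + sum_i weights[i] * auxiliaries[i].
    All dictionaries are assumed to share the same parameter names."""
    assert len(auxiliaries) == len(weights)
    residual = 1.0 - sum(weights)  # coefficient kept for the main featurizer
    merged = {}
    for name, value in main.items():
        merged[name] = residual * value
        for aux, w in zip(auxiliaries, weights):
            merged[name] += w * aux[name]
    return merged

# Toy example with scalar "parameters": equal weighting of main and one auxiliary.
main = {"layer.weight": 1.0}
aux = {"layer.weight": 3.0}
print(merge_featurizers(main, [aux], [0.5]))  # -> {'layer.weight': 2.0}
```

In practice the same averaging would be applied entry-wise to full weight tensors (e.g. PyTorch state dicts), and the coefficients could be tuned per auxiliary dataset.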
Author Information
Alexandre Rame (FAIR Meta AI - ISIR)
Currently a research intern at FAIR Meta AI and a PhD student at Sorbonne University in Paris under the supervision of Professor Matthieu Cord. Working to make deep neural networks generalize out of distribution.
Jianyu Zhang (New York University)
Leon Bottou (Facebook AI Research)
Léon Bottou received a Diplôme from l'Ecole Polytechnique, Paris in 1987, a Magistère en Mathématiques Fondamentales et Appliquées et Informatiques from Ecole Normale Supérieure, Paris in 1988, and a PhD in Computer Science from Université de Paris-Sud in 1991. He was with AT&T Bell Labs from 1991 to 1992 and with AT&T Labs from 1995 to 2002. Between 1992 and 1995 he was chairman of Neuristique in Paris, a small company pioneering machine learning for data mining applications. He has been with NEC Labs America in Princeton since 2002. Léon's primary research interest is machine learning. His contributions to this field address theory, algorithms, and large-scale applications. Léon's secondary research interest is data compression and coding. His best known contribution in this field is the DjVu document compression technology (http://www.djvu.org). Léon has published over 70 papers and serves on the boards of JMLR and IEEE TPAMI. He also serves on the scientific advisory board of Kxen Inc.
David Lopez-Paz (Meta AI)
More from the Same Authors
- 2021 : On the Relation between Distributionally Robust Optimization and Data Curation »
  Agnieszka Słowik · Leon Bottou
- 2021 : Poster: Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation »
  Agnieszka Słowik · Leon Bottou
- 2022 Workshop: INTERPOLATE — First Workshop on Interpolation Regularizers and Beyond »
  Yann Dauphin · David Lopez-Paz · Vikas Verma · Boyi Li
- 2022 Poster: The Effects of Regularization and Data Augmentation are Class Dependent »
  Randall Balestriero · Leon Bottou · Yann LeCun
- 2022 Poster: Diverse Weight Averaging for Out-of-Distribution Generalization »
  Alexandre Rame · Matthieu Kirchmeyer · Thibaud Rahier · Alain Rakotomamonjy · Patrick Gallinari · Matthieu Cord
- 2021 : Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation »
  Agnieszka Słowik · Leon Bottou
- 2019 Poster: Cold Case: The Lost MNIST Digits »
  Chhavi Yadav · Leon Bottou
- 2019 Spotlight: Cold Case: The Lost MNIST Digits »
  Chhavi Yadav · Leon Bottou
- 2018 : Opening Remarks »
  David Lopez-Paz
- 2018 Workshop: Causal Learning »
  Martin Arjovsky · Christina Heinze-Deml · Anna Klimovskaia · Maxime Oquab · Leon Bottou · David Lopez-Paz
- 2018 Workshop: Smooth Games Optimization and Machine Learning »
  Simon Lacoste-Julien · Ioannis Mitliagkas · Gauthier Gidel · Vasilis Syrgkanis · Eva Tardos · Leon Bottou · Sebastian Nowozin
- 2018 Poster: SING: Symbol-to-Instrument Neural Generator »
  Alexandre Defossez · Neil Zeghidour · Nicolas Usunier · Leon Bottou · Francis Bach
- 2017 : Geometrical Insights for Unsupervised Learning »
  Leon Bottou
- 2017 : Looking for a Missing Signal »
  Leon Bottou
- 2017 Poster: Gradient Episodic Memory for Continual Learning »
  David Lopez-Paz · Marc'Aurelio Ranzato
- 2016 : Welcome »
  David Lopez-Paz · Alec Radford · Leon Bottou
- 2016 Workshop: Adversarial Training »
  David Lopez-Paz · Leon Bottou · Alec Radford
- 2015 Workshop: Optimization for Machine Learning (OPT2015) »
  Suvrit Sra · Alekh Agarwal · Leon Bottou · Sashank J. Reddi
- 2014 Workshop: Learning Semantics »
  Cedric Archambeau · Antoine Bordes · Leon Bottou · Chris J Burges · David Grangier
- 2014 Workshop: Deep Learning and Representation Learning »
  Andrew Y Ng · Yoshua Bengio · Adam Coates · Roland Memisevic · Sharanyan Chetlur · Geoffrey E Hinton · Shamim Nemati · Bryan Catanzaro · Surya Ganguli · Herbert Jaeger · Phil Blunsom · Leon Bottou · Volodymyr Mnih · Chen-Yu Lee · Rich M Schwartz
- 2013 Workshop: NIPS 2013 Workshop on Causality: Large-scale Experiment Design and Inference of Causal Mechanisms »
  Isabelle Guyon · Leon Bottou · Bernhard Schölkopf · Alexander Statnikov · Evelyne Viegas · James M Robins
- 2011 Workshop: Learning Semantics »
  Antoine Bordes · Jason E Weston · Ronan Collobert · Leon Bottou
- 2007 Tutorial: Learning Using Many Examples »
  Leon Bottou · Andrew W Moore
- 2007 Poster: The Tradeoffs of Large Scale Learning »
  Leon Bottou · Olivier Bousquet