Large sparsely-activated models have obtained excellent performance in multiple domains. However, such models are typically trained on a single modality at a time. We present the Language-Image MoE, LIMoE, a sparse mixture-of-experts model capable of multimodal learning. LIMoE accepts both images and text simultaneously, while being trained using a contrastive loss. MoEs are a natural fit for a multimodal backbone, since expert layers can learn an appropriate partitioning of modalities. However, new challenges arise; in particular, training stability and balanced expert utilization, for which we propose an entropy-based regularization scheme. Across multiple scales, we demonstrate performance improvements over dense models of equivalent computational cost. LIMoE-L/16, trained comparably to CLIP-L/14, achieves 77.9% zero-shot ImageNet accuracy (vs. 76.2%), and when further scaled to H/14 (with additional data) it achieves 83.8%, approaching state-of-the-art methods which use custom per-modality backbones and pre-training schemes. We analyse the quantitative and qualitative behavior of LIMoE, and demonstrate phenomena such as differing treatment of the modalities and the emergence of modality-specific experts.
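The entropy-based regularization mentioned in the abstract can be illustrated with a short sketch. The snippet below is an illustrative assumption, not the paper's implementation: it assumes a per-token router softmax `probs` over the experts, and combines a local entropy term (pushing each token towards a confident expert choice) with a global entropy term (pushing the batch-averaged routing towards balanced expert usage). A scheme of this kind would typically be applied per modality and weighted as an auxiliary loss alongside the contrastive training objective.

```python
# Minimal sketch (not the authors' released code) of an entropy-based
# auxiliary loss for MoE routing. Assumes a router softmax `probs` of
# shape [num_tokens, num_experts] computed for the tokens of one modality.
import jax.numpy as jnp

def entropy(p, axis=-1, eps=1e-9):
    """Shannon entropy along `axis`, with a small epsilon for stability."""
    return -jnp.sum(p * jnp.log(p + eps), axis=axis)

def entropy_auxiliary_loss(probs):
    # Local term: mean entropy of each token's routing distribution.
    # Minimizing it encourages confident (low-entropy) expert choices.
    local = jnp.mean(entropy(probs))
    # Global term: entropy of the batch-averaged routing distribution.
    # Subtracting it (i.e. maximizing it) encourages tokens to spread
    # across experts, promoting balanced expert utilization.
    global_term = entropy(jnp.mean(probs, axis=0))
    return local - global_term
```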
Author Information
Basil Mustafa (Google)
Carlos Riquelme (Google Brain)
Joan Puigcerver (Google Research)
Rodolphe Jenatton (Amazon Research)
Neil Houlsby (Google)
More from the Same Authors
- 2021: A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches
  Vincent Dumoulin · Neil Houlsby · Utku Evci · Xiaohua Zhai · Ross Goroshin · Sylvain Gelly · Hugo Larochelle
- 2022: Panel
  Erin Grant · Richard Turner · Neil Houlsby · Priyanka Agrawal · Abhijeet Awasthi · Salomey Osei
- 2022 Poster: On the Adversarial Robustness of Mixture of Experts
  Joan Puigcerver · Rodolphe Jenatton · Carlos Riquelme · Pranjal Awasthi · Srinadh Bhojanapalli
- 2022 Poster: UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes
  Alexander Kolesnikov · André Susano Pinto · Lucas Beyer · Xiaohua Zhai · Jeremiah Harmsen · Neil Houlsby
- 2021 Poster: MLP-Mixer: An all-MLP Architecture for Vision
  Ilya Tolstikhin · Neil Houlsby · Alexander Kolesnikov · Lucas Beyer · Xiaohua Zhai · Thomas Unterthiner · Jessica Yung · Andreas Steiner · Daniel Keysers · Jakob Uszkoreit · Mario Lucic · Alexey Dosovitskiy
- 2021 Poster: Scaling Vision with Sparse Mixture of Experts
  Carlos Riquelme · Joan Puigcerver · Basil Mustafa · Maxim Neumann · Rodolphe Jenatton · André Susano Pinto · Daniel Keysers · Neil Houlsby
- 2021 Poster: Revisiting the Calibration of Modern Neural Networks
  Matthias Minderer · Josip Djolonga · Rob Romijnders · Frances Hubis · Xiaohua Zhai · Neil Houlsby · Dustin Tran · Mario Lucic
- 2019 Poster: Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates
  Carlos Riquelme · Hugo Penedones · Damien Vincent · Hartmut Maennel · Sylvain Gelly · Timothy A Mann · Andre Barreto · Gergely Neu
- 2019 Poster: Practical and Consistent Estimation of f-Divergences
  Paul Rubenstein · Olivier Bousquet · Josip Djolonga · Carlos Riquelme · Ilya Tolstikhin
- 2018 Poster: Scalable Hyperparameter Transfer Learning
  Valerio Perrone · Rodolphe Jenatton · Matthias W Seeger · Cedric Archambeau
- 2013 Poster: Convex Relaxations for Permutation Problems
  Fajwel Fogel · Rodolphe Jenatton · Francis Bach · Alexandre d'Aspremont