Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent scalability in Natural Language Processing. In Computer Vision, however, almost all performant networks are "dense", that is, every input is processed by every parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision Transformer that is scalable and competitive with the largest dense networks. When applied to image recognition, V-MoE matches the performance of state-of-the-art networks while requiring as little as half of the compute at inference time. Further, we propose an extension to the routing algorithm that can prioritize subsets of each input across the entire batch, leading to adaptive per-image compute. This allows V-MoE to trade off performance and compute smoothly at test time. Finally, we demonstrate the potential of V-MoE to scale vision models, and train a 15B-parameter model that attains 90.35% top-1 accuracy on ImageNet.
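The routing extension described above (Batch Prioritized Routing in the paper) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the top-1 routing choice, and the shapes and capacity values below are illustrative assumptions.

```python
import numpy as np

def top1_priority_route(tokens, gate_w, num_experts, capacity):
    """Assign each token to its top-scoring expert, filling expert buffers
    in order of gating weight so low-priority tokens are dropped first."""
    logits = tokens @ gate_w                        # [n_tokens, num_experts]
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = z / z.sum(axis=-1, keepdims=True)       # softmax gating weights
    expert = probs.argmax(axis=-1)                  # top-1 expert per token
    priority = probs.max(axis=-1)                   # gating weight = priority
    assignment = np.full(tokens.shape[0], -1)       # -1 means token is dropped
    load = np.zeros(num_experts, dtype=int)
    for i in np.argsort(-priority):                 # highest priority first
        if load[expert[i]] < capacity:              # expert buffer not full
            assignment[i] = expert[i]
            load[expert[i]] += 1
    return assignment, priority

# Illustrative usage: 8 tokens, 4 experts, each expert keeps at most 2 tokens.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 16))
gate_w = rng.normal(size=(16, 4))
assignment, priority = top1_priority_route(tokens, gate_w, num_experts=4, capacity=2)
print(assignment)  # routed tokens carry an expert id; dropped tokens show -1
```

In the paper's design, tokens that are not routed simply bypass the expert layer through the residual connection; shrinking the per-expert capacity is what yields the adaptive, reduced per-image compute described in the abstract.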
Author Information
Carlos Riquelme (Google Brain)
Joan Puigcerver (Universitat Politècnica de València)
Basil Mustafa (Google)
Maxim Neumann (Google)
Rodolphe Jenatton (Amazon)
André Susano Pinto (Google)
Daniel Keysers (Google Research, Brain Team)
Neil Houlsby (Google)
More from the Same Authors
- 2021: A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches
  Vincent Dumoulin · Neil Houlsby · Utku Evci · Xiaohua Zhai · Ross Goroshin · Sylvain Gelly · Hugo Larochelle
- 2021: Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
  Zachary Nado · Neil Band · Mark Collier · Josip Djolonga · Mike Dusenberry · Sebastian Farquhar · Qixuan Feng · Angelos Filos · Marton Havasi · Rodolphe Jenatton · Ghassen Jerfel · Jeremiah Liu · Zelda Mariet · Jeremy Nixon · Shreyas Padhy · Jie Ren · Tim G. J. Rudner · Yeming Wen · Florian Wenzel · Kevin Murphy · D. Sculley · Balaji Lakshminarayanan · Jasper Snoek · Yarin Gal · Dustin Tran
- 2021: Deep Classifiers with Label Noise Modeling and Distance Awareness
  Vincent Fortuin · Mark Collier · Florian Wenzel · James Allingham · Jeremiah Liu · Dustin Tran · Balaji Lakshminarayanan · Jesse Berent · Rodolphe Jenatton · Effrosyni Kokiopoulou
- 2022 Panel: Panel 2C-4: UViM: A Unified… & K-LITE: Learning Transferable…
  Chunyuan Li · André Susano Pinto
- 2022: Panel
  Erin Grant · Richard Turner · Neil Houlsby · Priyanka Agrawal · Abhijeet Awasthi · Salomey Osei
- 2022 Poster: On the Adversarial Robustness of Mixture of Experts
  Joan Puigcerver · Rodolphe Jenatton · Carlos Riquelme · Pranjal Awasthi · Srinadh Bhojanapalli
- 2022 Poster: UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes
  Alexander Kolesnikov · André Susano Pinto · Lucas Beyer · Xiaohua Zhai · Jeremiah Harmsen · Neil Houlsby
- 2022 Poster: Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
  Basil Mustafa · Carlos Riquelme · Joan Puigcerver · Rodolphe Jenatton · Neil Houlsby
- 2021 Poster: MLP-Mixer: An all-MLP Architecture for Vision
  Ilya Tolstikhin · Neil Houlsby · Alexander Kolesnikov · Lucas Beyer · Xiaohua Zhai · Thomas Unterthiner · Jessica Yung · Andreas Steiner · Daniel Keysers · Jakob Uszkoreit · Mario Lucic · Alexey Dosovitskiy
- 2021 Poster: Revisiting the Calibration of Modern Neural Networks
  Matthias Minderer · Josip Djolonga · Rob Romijnders · Frances Hubis · Xiaohua Zhai · Neil Houlsby · Dustin Tran · Mario Lucic
- 2020 Poster: What Do Neural Networks Learn When Trained With Random Labels?
  Hartmut Maennel · Ibrahim Alabdulmohsin · Ilya Tolstikhin · Robert Baldock · Olivier Bousquet · Sylvain Gelly · Daniel Keysers
- 2020 Spotlight: What Do Neural Networks Learn When Trained With Random Labels?
  Hartmut Maennel · Ibrahim Alabdulmohsin · Ilya Tolstikhin · Robert Baldock · Olivier Bousquet · Sylvain Gelly · Daniel Keysers
- 2019: Disentanglement Challenge - Disentanglement and Results of the Challenge Stages 1 & 2
  Djordje Miladinovic · Stefan Bauer · Daniel Keysers
- 2019 Poster: Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates
  Carlos Riquelme · Hugo Penedones · Damien Vincent · Hartmut Maennel · Sylvain Gelly · Timothy A Mann · Andre Barreto · Gergely Neu
- 2019 Poster: Practical and Consistent Estimation of f-Divergences
  Paul Rubenstein · Olivier Bousquet · Josip Djolonga · Carlos Riquelme · Ilya Tolstikhin