Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them is necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well-established CNNs and Transformers.
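To make the two layer types concrete, here is a minimal NumPy sketch of a single Mixer block. It is an illustration, not the authors' implementation (the paper's code is in JAX/Flax); the shapes, weight initialization, and helper names are invented for this example, while the overall structure (pre-LayerNorm, a token-mixing MLP over the patch axis, a channel-mixing MLP over the channel axis, and residual connections) follows the architecture the paper describes.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each patch's features over the channel (last) axis.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def gelu(x):
    # tanh approximation of the GELU activation.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, w1, w2):
    # Two-layer perceptron applied along the last axis of x.
    return gelu(x @ w1) @ w2

def mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2):
    """One Mixer layer on a (num_patches, channels) token table x.

    Token mixing:   the MLP runs across the patch axis (via transpose),
                    mixing spatial information between locations.
    Channel mixing: the MLP runs across the channel axis, independently
                    per patch, mixing the per-location features.
    """
    y = x + mlp(layer_norm(x).T, tok_w1, tok_w2).T  # token mixing
    return y + mlp(layer_norm(y), ch_w1, ch_w2)     # channel mixing

# Toy example (hypothetical sizes): 196 patches (a 14x14 grid),
# 512 channels, hidden widths 256 (token MLP) and 2048 (channel MLP).
rng = np.random.default_rng(0)
P, C, D_tok, D_ch = 196, 512, 256, 2048
x = rng.normal(size=(P, C))
tok_w1 = rng.normal(scale=0.02, size=(P, D_tok))
tok_w2 = rng.normal(scale=0.02, size=(D_tok, P))
ch_w1 = rng.normal(scale=0.02, size=(C, D_ch))
ch_w2 = rng.normal(scale=0.02, size=(D_ch, C))
print(mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2).shape)  # (196, 512)
```

Note how neither step uses convolution or attention: token mixing is just a shared dense layer applied to the transposed patch table, which is what lets the block exchange spatial information with plain MLPs.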
Author Information
Ilya Tolstikhin (Google, Brain Team, Zurich)
Neil Houlsby (Google)
Alexander Kolesnikov (Google)
Lucas Beyer (Google Brain Zürich)
Xiaohua Zhai (Google Brain)
Thomas Unterthiner (Google Research)
Jessica Yung (Google)
Andreas Steiner (Google)
- Education: medical doctor, MSc in bio-electronics
- MD work: computer-assisted diagnostics for tuberculosis screening in Tanzanian prisons
- Developed next-generation sequencing variant-calling software and tools for epidemiological studies using hand-held devices
- Software Engineer with Google since 2015, working on Shopping, ML, and ML productionization
Daniel Keysers (Google Research, Brain Team)
Jakob Uszkoreit (Google)
Mario Lucic (Google Brain)
Alexey Dosovitskiy (Inceptive)
More from the Same Authors
- 2021: A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches »
  Vincent Dumoulin · Neil Houlsby · Utku Evci · Xiaohua Zhai · Ross Goroshin · Sylvain Gelly · Hugo Larochelle
- 2023 Poster: Scaling Open-Vocabulary Object Detection »
  Matthias Minderer · Alexey Gritsenko · Neil Houlsby
- 2023 Poster: Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design »
  Ibrahim Alabdulmohsin · Lucas Beyer · Alexander Kolesnikov · Xiaohua Zhai
- 2023 Poster: Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution »
  Mostafa Dehghani · Basil Mustafa · Josip Djolonga · Jonathan Heek · Matthias Minderer · Mathilde Caron · Andreas Steiner · Joan Puigcerver · Robert Geirhos · Ibrahim Alabdulmohsin · Avital Oliver · Piotr Padlewski · Alexey Gritsenko · Mario Lucic · Neil Houlsby
- 2023 Poster: Three Towers: Flexible Contrastive Learning with Pretrained Image Models »
  Jannik Kossen · Mark Collier · Basil Mustafa · Xiao Wang · Xiaohua Zhai · Lucas Beyer · Andreas Steiner · Jesse Berent · Rodolphe Jenatton · Effrosyni Kokiopoulou
- 2023 Poster: Image Captioners Are Scalable Vision Learners Too »
  Michael Tschannen · Manoj Kumar · Andreas Steiner · Xiaohua Zhai · Neil Houlsby · Lucas Beyer
- 2023 Oral: Image Captioners Are Scalable Vision Learners Too »
  Michael Tschannen · Manoj Kumar · Andreas Steiner · Xiaohua Zhai · Neil Houlsby · Lucas Beyer
- 2022: Panel »
  Erin Grant · Richard Turner · Neil Houlsby · Priyanka Agrawal · Abhijeet Awasthi · Salomey Osei
- 2022 Poster: VCT: A Video Compression Transformer »
  Fabian Mentzer · George D Toderici · David Minnen · Sergi Caelles · Sung Jin Hwang · Mario Lucic · Eirikur Agustsson
- 2022 Poster: UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes »
  Alexander Kolesnikov · André Susano Pinto · Lucas Beyer · Xiaohua Zhai · Jeremiah Harmsen · Neil Houlsby
- 2022 Poster: Object Scene Representation Transformer »
  Mehdi S. M. Sajjadi · Daniel Duckworth · Aravindh Mahendran · Sjoerd van Steenkiste · Filip Pavetic · Mario Lucic · Leonidas Guibas · Klaus Greff · Thomas Kipf
- 2022 Poster: Revisiting Neural Scaling Laws in Language and Vision »
  Ibrahim Alabdulmohsin · Behnam Neyshabur · Xiaohua Zhai
- 2022 Poster: Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts »
  Basil Mustafa · Carlos Riquelme · Joan Puigcerver · Rodolphe Jenatton · Neil Houlsby
- 2021 Workshop: ImageNet: Past, Present, and Future »
  Zeynep Akata · Lucas Beyer · Sanghyuk Chun · A. Sophia Koepke · Diane Larlus · Seong Joon Oh · Rafael Rezende · Sangdoo Yun · Xiaohua Zhai
- 2021 Poster: A Near-Optimal Algorithm for Debiasing Trained Machine Learning Models »
  Ibrahim Alabdulmohsin · Mario Lucic
- 2021 Poster: Scaling Vision with Sparse Mixture of Experts »
  Carlos Riquelme · Joan Puigcerver · Basil Mustafa · Maxim Neumann · Rodolphe Jenatton · André Susano Pinto · Daniel Keysers · Neil Houlsby
- 2021 Poster: Revisiting the Calibration of Modern Neural Networks »
  Matthias Minderer · Josip Djolonga · Rob Romijnders · Frances Hubis · Xiaohua Zhai · Neil Houlsby · Dustin Tran · Mario Lucic
- 2021 Poster: Do Vision Transformers See Like Convolutional Neural Networks? »
  Maithra Raghu · Thomas Unterthiner · Simon Kornblith · Chiyuan Zhang · Alexey Dosovitskiy
- 2020 Poster: Object-Centric Learning with Slot Attention »
  Francesco Locatello · Dirk Weissenborn · Thomas Unterthiner · Aravindh Mahendran · Georg Heigold · Jakob Uszkoreit · Alexey Dosovitskiy · Thomas Kipf
- 2020 Spotlight: Object-Centric Learning with Slot Attention »
  Francesco Locatello · Dirk Weissenborn · Thomas Unterthiner · Aravindh Mahendran · Georg Heigold · Jakob Uszkoreit · Alexey Dosovitskiy · Thomas Kipf
- 2020 Poster: What Do Neural Networks Learn When Trained With Random Labels? »
  Hartmut Maennel · Ibrahim Alabdulmohsin · Ilya Tolstikhin · Robert Baldock · Olivier Bousquet · Sylvain Gelly · Daniel Keysers
- 2020 Spotlight: What Do Neural Networks Learn When Trained With Random Labels? »
  Hartmut Maennel · Ibrahim Alabdulmohsin · Ilya Tolstikhin · Robert Baldock · Olivier Bousquet · Sylvain Gelly · Daniel Keysers
- 2020 Session: Orals & Spotlights Track 08: Deep Learning »
  Graham Taylor · Mario Lucic
- 2019: Disentanglement Challenge - Disentanglement and Results of the Challenge Stages 1 & 2 »
  Djordje Miladinovic · Stefan Bauer · Daniel Keysers
- 2019 Poster: Practical and Consistent Estimation of f-Divergences »
  Paul Rubenstein · Olivier Bousquet · Josip Djolonga · Carlos Riquelme · Ilya Tolstikhin
- 2018 Poster: Deep Generative Models for Distribution-Preserving Lossy Compression »
  Michael Tschannen · Eirikur Agustsson · Mario Lucic
- 2018 Poster: Assessing Generative Models via Precision and Recall »
  Mehdi S. M. Sajjadi · Olivier Bachem · Mario Lucic · Olivier Bousquet · Sylvain Gelly
- 2018 Poster: Unsupervised Learning of Shape and Pose with Differentiable Point Clouds »
  Eldar Insafutdinov · Alexey Dosovitskiy
- 2018 Poster: Are GANs Created Equal? A Large-Scale Study »
  Mario Lucic · Karol Kurach · Marcin Michalski · Sylvain Gelly · Olivier Bousquet
- 2017: Self-Normalizing Neural Networks »
  Thomas Unterthiner
- 2017: Poster session 1 »
  Van-Doan Nguyen · Stephan Eismann · Haozhen Wu · Garrett Goh · Kristina Preuer · Thomas Unterthiner · Matthew Ragoza · Tien-Lam Pham · Günter Klambauer · Andrea Rocchetto · Maxwell Hutchinson · Qian Yang · Rafael Gomez-Bombarelli · Sheshera Mysore · Brooke Husic · Ryan-Rhys Griffiths · Masashi Tsubaki · Emma Strubell · Philippe Schwaller · Théophile Gaudin · Michael Brenner · Li Li
- 2017 Spotlight: Self-Normalizing Neural Networks »
  Günter Klambauer · Thomas Unterthiner · Andreas Mayr · Sepp Hochreiter
- 2017 Poster: Self-Normalizing Neural Networks »
  Günter Klambauer · Thomas Unterthiner · Andreas Mayr · Sepp Hochreiter
- 2017 Poster: GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium »
  Martin Heusel · Hubert Ramsauer · Thomas Unterthiner · Bernhard Nessler · Sepp Hochreiter
- 2017 Poster: AdaGAN: Boosting Generative Models »
  Ilya Tolstikhin · Sylvain Gelly · Olivier Bousquet · Carl-Johann Simon-Gabriel · Bernhard Schölkopf
- 2016 Poster: Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels »
  Ilya Tolstikhin · Bharath Sriperumbudur · Bernhard Schölkopf
- 2016 Poster: Consistent Kernel Mean Estimation for Functions of Random Variables »
  Carl-Johann Simon-Gabriel · Adam Scibior · Ilya Tolstikhin · Bernhard Schölkopf
- 2015 Poster: Rectified Factor Networks »
  Djork-Arné Clevert · Andreas Mayr · Thomas Unterthiner · Sepp Hochreiter
- 2014 Workshop: Second Workshop on Transfer and Multi-Task Learning: Theory meets Practice »
  Urun Dogan · Tatiana Tommasi · Yoshua Bengio · Francesco Orabona · Marius Kloft · Andres Munoz · Gunnar Rätsch · Hal Daumé III · Mehryar Mohri · Xuezhi Wang · Daniel Hernández-Lobato · Song Liu · Thomas Unterthiner · Pascal Germain · Vinay P Namboodiri · Michael Goetz · Christopher Berlind · Sigurd Spieckermann · Marta Soare · Yujia Li · Vitaly Kuznetsov · Wenzhao Lian · Daniele Calandriello · Emilie Morvant
- 2014 Workshop: Representation and Learning Methods for Complex Outputs »
  Richard Zemel · Dale Schuurmans · Kilian Q Weinberger · Yuhong Guo · Jia Deng · Francesco Dinuzzo · Hal Daumé III · Honglak Lee · Noah A Smith · Richard Sutton · Jiaqian Yu · Vitaly Kuznetsov · Luke Vilnis · Hanchen Xiong · Calvin Murdock · Thomas Unterthiner · Jean-Francis Roy · Martin Renqiang Min · Hichem Sahbi · Fabio Massimo Zanzotto
- 2013 Poster: PAC-Bayes-Empirical-Bernstein Inequality »
  Ilya Tolstikhin · Yevgeny Seldin
- 2013 Spotlight: PAC-Bayes-Empirical-Bernstein Inequality »
  Ilya Tolstikhin · Yevgeny Seldin