Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than that of convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths of both architectures, we present CoAtNets (pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets: without extra data, CoAtNet achieves 86.0% ImageNet top-1 accuracy; when pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT-300M while using 23x less data; notably, when we further scale up CoAtNet with JFT-3B, it achieves 90.88% top-1 accuracy on ImageNet, establishing a new state-of-the-art result.
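As a concrete illustration of insight (1), the sketch below shows, in a minimal 1-D PyTorch form, one way a depthwise-convolution-style relative bias can be folded into the attention logits before the softmax. This is a simplified toy under stated assumptions (single head, queries = keys = values, 1-D positions rather than the paper's 2-D feature maps), and the names `relative_attention_1d` and `w_rel` are illustrative rather than taken from the CoAtNet code.

```python
import torch
import torch.nn.functional as F

def relative_attention_1d(x, w_rel):
    """Toy pre-softmax relative attention (1-D sequence version).

    x:     (batch, length, dim)  input features; queries, keys and values
                                 are all taken to be x for simplicity.
    w_rel: (2 * length - 1,)     learnable scalar bias indexed by the relative
                                 position j - i, playing the role of an
                                 input-independent depthwise-convolution kernel.
    """
    b, n, d = x.shape
    # Content-based attention logits, as in standard self-attention.
    logits = torch.einsum('bid,bjd->bij', x, x) / d ** 0.5
    # Add the input-independent relative bias w_{j-i} before the softmax.
    idx = torch.arange(n)
    rel = idx[None, :] - idx[:, None] + (n - 1)   # map j - i into [0, 2n - 2]
    logits = logits + w_rel[rel]
    attn = F.softmax(logits, dim=-1)
    return torch.einsum('bij,bjd->bid', attn, x)

# Tiny usage example with hypothetical sizes.
x = torch.randn(2, 8, 16)
w_rel = torch.zeros(2 * 8 - 1, requires_grad=True)
y = relative_attention_1d(x, w_rel)   # (2, 8, 16)
```

Because the added bias depends only on the relative position, it behaves like a static (convolution-style) kernel shared across all locations, while the content term keeps the input-adaptive weighting of self-attention.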
Author Information
Zihang Dai (Google Brain)
Hanxiao Liu (Google Brain)
Quoc V Le (Google)
Mingxing Tan (Google Brain)
More from the Same Authors
- 2021 Spotlight: Combiner: Full Attention Transformer with Sparse Computation Cost »
  Hongyu Ren · Hanjun Dai · Zihang Dai · Mengjiao (Sherry) Yang · Jure Leskovec · Dale Schuurmans · Bo Dai
- 2022 Poster: Mixture-of-Experts with Expert Choice Routing »
  Yanqi Zhou · Tao Lei · Hanxiao Liu · Nan Du · Yanping Huang · Vincent Zhao · Andrew Dai · zhifeng Chen · Quoc V Le · James Laudon
- 2022 Poster: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models »
  Jason Wei · Xuezhi Wang · Dale Schuurmans · Maarten Bosma · brian ichter · Fei Xia · Ed Chi · Quoc V Le · Denny Zhou
- 2022 Poster: TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets »
  Chengrun Yang · Gabriel Bender · Hanxiao Liu · Pieter-Jan Kindermans · Madeleine Udell · Yifeng Lu · Quoc V Le · Da Huang
- 2021 Poster: Combiner: Full Attention Transformer with Sparse Computation Cost »
  Hongyu Ren · Hanjun Dai · Zihang Dai · Mengjiao (Sherry) Yang · Jure Leskovec · Dale Schuurmans · Bo Dai
- 2021 Poster: Searching for Efficient Transformers for Language Modeling »
  David So · Wojciech Mańke · Hanxiao Liu · Zihang Dai · Noam Shazeer · Quoc V Le
- 2021 Poster: Pay Attention to MLPs »
  Hanxiao Liu · Zihang Dai · David So · Quoc V Le
- 2020: Panel Discussion & Closing »
  Yejin Choi · Alexei Efros · Chelsea Finn · Kristen Grauman · Quoc V Le · Yann LeCun · Ruslan Salakhutdinov · Eric Xing
- 2020 Poster: Evolving Normalization-Activation Layers »
  Hanxiao Liu · Andy Brock · Karen Simonyan · Quoc V Le
- 2020 Spotlight: Evolving Normalization-Activation Layers »
  Hanxiao Liu · Andy Brock · Karen Simonyan · Quoc V Le
- 2020 Poster: PyGlove: Symbolic Programming for Automated Machine Learning »
  Daiyi Peng · Xuanyi Dong · Esteban Real · Mingxing Tan · Yifeng Lu · Gabriel Bender · Hanxiao Liu · Adam Kraft · Chen Liang · Quoc V Le
- 2020 Poster: RandAugment: Practical Automated Data Augmentation with a Reduced Search Space »
  Ekin Dogus Cubuk · Barret Zoph · Jonathon Shlens · Quoc V Le
- 2020 Oral: PyGlove: Symbolic Programming for Automated Machine Learning »
  Daiyi Peng · Xuanyi Dong · Esteban Real · Mingxing Tan · Yifeng Lu · Gabriel Bender · Hanxiao Liu · Adam Kraft · Chen Liang · Quoc V Le
- 2020 Poster: Rethinking Pre-training and Self-training »
  Barret Zoph · Golnaz Ghiasi · Tsung-Yi Lin · Yin Cui · Hanxiao Liu · Ekin Dogus Cubuk · Quoc V Le
- 2020 Oral: Rethinking Pre-training and Self-training »
  Barret Zoph · Golnaz Ghiasi · Tsung-Yi Lin · Yin Cui · Hanxiao Liu · Ekin Dogus Cubuk · Quoc V Le
- 2020 Poster: Unsupervised Data Augmentation for Consistency Training »
  Qizhe Xie · Zihang Dai · Eduard Hovy · Thang Luong · Quoc V Le
- 2020 Poster: Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing »
  Zihang Dai · Guokun Lai · Yiming Yang · Quoc V Le
- 2019 Poster: XLNet: Generalized Autoregressive Pretraining for Language Understanding »
  Zhilin Yang · Zihang Dai · Yiming Yang · Jaime Carbonell · Russ Salakhutdinov · Quoc V Le
- 2019 Oral: XLNet: Generalized Autoregressive Pretraining for Language Understanding »
  Zhilin Yang · Zihang Dai · Yiming Yang · Jaime Carbonell · Russ Salakhutdinov · Quoc V Le
- 2019 Poster: CondConv: Conditionally Parameterized Convolutions for Efficient Inference »
  Brandon Yang · Gabriel Bender · Quoc V Le · Jiquan Ngiam
- 2019 Poster: Mixtape: Breaking the Softmax Bottleneck Efficiently »
  Zhilin Yang · Thang Luong · Russ Salakhutdinov · Quoc V Le
- 2019 Poster: Saccader: Improving Accuracy of Hard Attention Models for Vision »
  Gamaleldin Elsayed · Simon Kornblith · Quoc V Le
- 2019 Poster: GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism »
  Yanping Huang · Youlong Cheng · Ankur Bapna · Orhan Firat · Dehao Chen · Mia Chen · HyoukJoong Lee · Jiquan Ngiam · Quoc V Le · Yonghui Wu · zhifeng Chen
- 2019 Poster: High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks »
  Ruben Villegas · Arkanath Pathak · Harini Kannan · Dumitru Erhan · Quoc V Le · Honglak Lee
- 2019 Poster: Re-examination of the Role of Latent Variables in Sequence Modeling »
  Guokun Lai · Zihang Dai · Yiming Yang · Shinjae Yoo
- 2018 Poster: Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing »
  Chen Liang · Mohammad Norouzi · Jonathan Berant · Quoc V Le · Ni Lao
- 2018 Spotlight: Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing »
  Chen Liang · Mohammad Norouzi · Jonathan Berant · Quoc V Le · Ni Lao
- 2018 Poster: DropBlock: A regularization method for convolutional networks »
  Golnaz Ghiasi · Tsung-Yi Lin · Quoc V Le
- 2017 Symposium: Metalearning »
  Risto Miikkulainen · Quoc V Le · Kenneth Stanley · Chrisantha Fernando
- 2017 Poster: Controllable Invariance through Adversarial Feature Learning »
  Qizhe Xie · Zihang Dai · Yulun Du · Eduard Hovy · Graham Neubig
- 2017 Poster: Good Semi-supervised Learning That Requires a Bad GAN »
  Zihang Dai · Zhilin Yang · Fan Yang · William Cohen · Ruslan Salakhutdinov
- 2016 Poster: An Online Sequence-to-Sequence Model Using Partial Conditioning »
  Navdeep Jaitly · Quoc V Le · Oriol Vinyals · Ilya Sutskever · David Sussillo · Samy Bengio
- 2016 Poster: Adaptive Smoothed Online Multi-Task Learning »
  Keerthiram Murugesan · Hanxiao Liu · Jaime Carbonell · Yiming Yang
- 2015 Poster: Semi-supervised Sequence Learning »
  Andrew Dai · Quoc V Le
- 2014 Poster: Sequence to Sequence Learning with Neural Networks »
  Ilya Sutskever · Oriol Vinyals · Quoc V Le
- 2014 Oral: Sequence to Sequence Learning with Neural Networks »
  Ilya Sutskever · Oriol Vinyals · Quoc V Le
- 2013 Workshop: Randomized Methods for Machine Learning »
  David Lopez-Paz · Quoc V Le · Alexander Smola