Vision Transformers (ViT) have achieved remarkable success in large-scale image recognition. They split every 2D image into a fixed number of patches, each of which is treated as a token. Generally, representing an image with more tokens leads to higher prediction accuracy, but it also results in drastically increased computational cost. To achieve a decent trade-off between accuracy and speed, the number of tokens is empirically set to 16x16 or 14x14. In this paper, we argue that every image has its own characteristics, and ideally the token number should be conditioned on each individual input. In fact, we have observed that a considerable number of “easy” images can be accurately predicted with as few as 4x4 tokens, while only a small fraction of “hard” ones need a finer representation. Inspired by this phenomenon, we propose a Dynamic Transformer to automatically configure a proper number of tokens for each input image. This is achieved by cascading multiple Transformers with increasing numbers of tokens, which are sequentially activated in an adaptive fashion at test time, i.e., the inference is terminated once a sufficiently confident prediction is produced. We further design efficient feature reuse and relationship reuse mechanisms across different components of the Dynamic Transformer to reduce redundant computations. Extensive empirical results on ImageNet, CIFAR-10, and CIFAR-100 demonstrate that our method significantly outperforms the competitive baselines in terms of both theoretical computational efficiency and practical inference speed. Code and pre-trained models (based on PyTorch and MindSpore) are available at https://github.com/blackfeather-wang/Dynamic-Vision-Transformer and https://github.com/blackfeather-wang/Dynamic-Vision-Transformer-MindSpore.
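The adaptive inference procedure described in the abstract (a cascade of Transformers with increasing token counts, exited early once the prediction is sufficiently confident) can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration of confidence-based early exiting, not the authors' implementation (see the linked repositories): the classifier names and threshold values are placeholders, and the feature-reuse and relationship-reuse mechanisms of the paper are omitted.

```python
# Minimal sketch of confidence-based early-exit inference over a cascade of ViTs.
# Assumptions: the cascade models and threshold values are hypothetical placeholders.
import torch
import torch.nn.functional as F


@torch.no_grad()
def dynamic_inference(image, cascade, thresholds):
    """Run ViTs with increasing token counts; stop once the prediction is confident.

    image:      a (1, 3, H, W) tensor holding one input image.
    cascade:    ViT classifiers ordered by increasing token number
                (e.g. 4x4 -> 7x7 -> 14x14 patch grids); assumed, not provided here.
    thresholds: per-stage confidence thresholds in (0, 1).
    Returns the accepted logits and the index of the stage that produced them.
    """
    for i, model in enumerate(cascade):
        logits = model(image)                                 # (1, num_classes)
        confidence = F.softmax(logits, dim=-1).max().item()   # top-1 probability
        # "Easy" images exit at a cheap, coarse-token stage; the final stage
        # always returns, so every image receives a prediction.
        if confidence >= thresholds[i] or i == len(cascade) - 1:
            return logits, i


# Hypothetical usage (the three models are assumed, not defined here):
# cascade = [vit_4x4_tokens, vit_7x7_tokens, vit_14x14_tokens]
# logits, exit_stage = dynamic_inference(img, cascade, thresholds=[0.9, 0.9, 0.0])
```

In the full method, downstream Transformers additionally reuse features and attention relationships computed by upstream ones rather than recomputing from scratch, which this sketch does not capture.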
Author Information
Yulin Wang (Tsinghua University)
Rui Huang (Tsinghua University)
Shiji Song (Department of Automation, Tsinghua University)
Zeyi Huang (Huawei Technologies Ltd.)
Gao Huang (Tsinghua University)
More from the Same Authors
- 2021 Spotlight: Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning » Yiqin Yang · Xiaoteng Ma · Chenghao Li · Zewu Zheng · Qiyuan Zhang · Gao Huang · Jun Yang · Qianchuan Zhao
- 2022 Poster: Contrastive Language-Image Pre-Training with Knowledge Graphs » Xuran Pan · Tianzhu Ye · Dongchen Han · Shiji Song · Gao Huang
- 2022 Poster: Efficient Knowledge Distillation from Model Checkpoints » Chaofei Wang · Qisen Yang · Rui Huang · Shiji Song · Gao Huang
- 2022 Spotlight: Lightning Talks 1B-3 » Chaofei Wang · Qixun Wang · Jing Xu · Long-Kai Huang · Xi Weng · Fei Ye · Harsh Rangwani · shrinivas ramasubramanian · Yifei Wang · Qisen Yang · Xu Luo · Lei Huang · Adrian G. Bors · Ying Wei · Xinglin Pan · Sho Takemori · Hong Zhu · Rui Huang · Lei Zhao · Yisen Wang · Kato Takashi · Shiji Song · Yanan Li · Rao Anwer · Yuhei Umeda · Salman Khan · Gao Huang · Wenjie Pei · Fahad Shahbaz Khan · Venkatesh Babu R · Zenglin Xu
- 2022 Spotlight: Efficient Knowledge Distillation from Model Checkpoints » Chaofei Wang · Qisen Yang · Rui Huang · Shiji Song · Gao Huang
- 2022 Poster: Latency-aware Spatial-wise Dynamic Networks » Yizeng Han · Zhihang Yuan · Yifan Pu · Chenhao Xue · Shiji Song · Guangyu Sun · Gao Huang
- 2021 Poster: Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning » Yiqin Yang · Xiaoteng Ma · Chenghao Li · Zewu Zheng · Qiyuan Zhang · Gao Huang · Jun Yang · Qianchuan Zhao
- 2021 Poster: Searching Parameterized AP Loss for Object Detection » Tao Chenxin · Zizhang Li · Xizhou Zhu · Gao Huang · Yong Liu · jifeng dai
- 2020 Poster: Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification » Yulin Wang · Kangchen Lv · Rui Huang · Shiji Song · Le Yang · Gao Huang
- 2019 Poster: Regularized Anderson Acceleration for Off-Policy Deep Reinforcement Learning » Wenjie Shi · Shiji Song · Hui Wu · Ya-Chu Hsu · Cheng Wu · Gao Huang
- 2019 Poster: Implicit Semantic Data Augmentation for Deep Networks » Yulin Wang · Xuran Pan · Shiji Song · Hong Zhang · Gao Huang · Cheng Wu
- 2019 Poster: Asymmetric Valleys: Beyond Sharp and Flat Local Minima » Haowei He · Gao Huang · Yang Yuan
- 2019 Spotlight: Asymmetric Valleys: Beyond Sharp and Flat Local Minima » Haowei He · Gao Huang · Yang Yuan