Vision Transformer has shown great visual representation power in substantial vision tasks such as recognition and detection, and has thus been attracting fast-growing efforts to manually design more effective architectures. In this paper, we propose to use neural architecture search to automate this process, by searching not only the architecture but also the search space. The central idea is to gradually evolve different search dimensions guided by their E-T Error computed using a weight-sharing supernet. Moreover, we provide design guidelines for general vision transformers with extensive analysis according to the space searching process, which could promote the understanding of vision transformers. Remarkably, the searched models, named S3 (short for Searching the Search Space), from the searched space achieve superior performance to recently proposed models, such as Swin, DeiT and ViT, when evaluated on ImageNet. The effectiveness of S3 is also illustrated on object detection, semantic segmentation and visual question answering, demonstrating its generality to downstream vision and vision-language tasks. Code and models will be available at https://github.com/microsoft/Cream.
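The core loop — evolving a search dimension toward whichever region of the space yields lower error — can be sketched in a few lines. The following Python toy is an assumption-laden illustration only, not the paper's implementation: `et_error`, `evolve_dimension`, and the mock evaluation function are hypothetical names, and the real E-T Error is measured on subnets sampled from a trained weight-sharing supernet rather than a closed-form proxy.

```python
import random

def et_error(errors, k=5):
    """Hypothetical simplification: E-Error as the mean error over sampled
    subnets, T-Error as the mean error of the top-k best subnets."""
    e = sum(errors) / len(errors)
    t = sum(sorted(errors)[:k]) / k
    return e, t

def evolve_dimension(choices, evaluate, samples=20):
    """One evolution step for a single search dimension (e.g. depth choices):
    compare the two halves of the current window and shift the window toward
    the half whose sampled subnets score better on E-T Error."""
    left, right = choices[: len(choices) // 2], choices[len(choices) // 2:]
    err_left = [evaluate(random.choice(left)) for _ in range(samples)]
    err_right = [evaluate(random.choice(right)) for _ in range(samples)]
    e_l, t_l = et_error(err_left, k=min(5, samples))
    e_r, t_r = et_error(err_right, k=min(5, samples))
    step = choices[1] - choices[0]
    if e_l + t_l < e_r + t_r:            # left half better: shift window down
        return [c - step for c in choices]
    return [c + step for c in choices]   # right half better: shift window up

# Toy evaluation: pretend error shrinks as depth approaches 14 layers.
mock_eval = lambda depth: abs(14 - depth) / 14 + random.uniform(0, 0.05)
dims = [6, 8, 10, 12]
for _ in range(3):
    dims = evolve_dimension(dims, mock_eval)
print(dims)  # window drifts toward deeper configurations
```

In the paper, this evolution is applied jointly to several dimensions (depth, embedding dimension, MLP ratio, window size, number of heads), with each subnet's error read off cheaply from the shared supernet instead of being trained from scratch.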
Author Information
Minghao Chen (Stony Brook University)
Kan Wu (Sun Yat-sen University)
Bolin Ni (Institute of Automation, Chinese Academy of Sciences)
Houwen Peng (Microsoft Research)
Bei Liu (Microsoft Research Asia)
Jianlong Fu (Microsoft Research)
Hongyang Chao
Haibin Ling (State University of New York, Stony Brook)
More from the Same Authors
- 2022 Poster: Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning
  Yuchong Sun · Hongwei Xue · Ruihua Song · Bei Liu · Huan Yang · Jianlong Fu
- 2023 Poster: ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation
  ya sheng sun · Yifan Yang · Houwen Peng · Yifei Shen · Yuqing Yang · Han Hu · Lili Qiu · Hideki Koike
- 2022 Poster: PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies
  Guocheng Qian · Yuchen Li · Houwen Peng · Jinjie Mai · Hasan Hammoud · Mohamed Elhoseiny · Bernard Ghanem
- 2022 Poster: SwinTrack: A Simple and Strong Baseline for Transformer Tracking
  Liting Lin · Heng Fan · Zhipeng Zhang · Yong Xu · Haibin Ling
- 2021 Poster: Improving Visual Quality of Image Synthesis by A Token-based Generator with Transformers
  Yanhong Zeng · Huan Yang · Hongyang Chao · Jianbo Wang · Jianlong Fu
- 2021 Poster: Probing Inter-modality: Visual Parsing with Self-Attention for Vision-and-Language Pre-training
  Hongwei Xue · Yupan Huang · Bei Liu · Houwen Peng · Jianlong Fu · Houqiang Li · Jiebo Luo
- 2020 Poster: Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search
  Houwen Peng · Hao Du · Hongyuan Yu · QI LI · Jing Liao · Jianlong Fu
- 2020 Poster: Learning Semantic-aware Normalization for Generative Adversarial Networks
  Heliang Zheng · Jianlong Fu · Yanhong Zeng · Jiebo Luo · Zheng-Jun Zha
- 2020 Spotlight: Learning Semantic-aware Normalization for Generative Adversarial Networks
  Heliang Zheng · Jianlong Fu · Yanhong Zeng · Jiebo Luo · Zheng-Jun Zha
- 2019 Poster: Learning Deep Bilinear Transformation for Fine-grained Image Representation
  Heliang Zheng · Jianlong Fu · Zheng-Jun Zha · Jiebo Luo