Recently, a series of vision Transformers has emerged, showing superior performance with a more compact model size than conventional convolutional neural networks, thanks to the strong ability of Transformers to model long-range dependencies. However, the advantages of vision Transformers come at a price: self-attention, the core component of the Transformer, has quadratic complexity with respect to the input sequence length. Computation and memory costs therefore grow dramatically with sequence length, which makes it difficult to apply Transformers to vision tasks that require dense predictions on high-resolution feature maps.

In this paper, we propose a new vision Transformer, named Glance-and-Gaze Transformer (GG-Transformer), to address the aforementioned issues. It is motivated by the Glance-and-Gaze behavior of human beings when recognizing objects in natural scenes, and it can efficiently model both long-range dependencies and local context. In GG-Transformer, the Glance and Gaze behaviors are realized by two parallel branches: the Glance branch performs self-attention on adaptively-dilated partitions of the input, which yields linear complexity while still enjoying a global receptive field; the Gaze branch is implemented by a simple depth-wise convolutional layer, which supplies local image context to the features obtained by the Glance mechanism. We empirically demonstrate that our method achieves consistently superior performance over previous state-of-the-art Transformers on various vision tasks and benchmarks.
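The two branches described above map naturally onto a small module. Below is a minimal PyTorch-style sketch of one Glance-and-Gaze block, written from the abstract alone: the module name (`GlanceGazeBlock`), the partition rule, and the fixed dilation factor are illustrative assumptions, not the authors' reference implementation (in the paper the partitions are adaptively dilated so that each partition keeps a fixed token count, which is what gives the linear complexity).

```python
# A minimal sketch of one Glance-and-Gaze block, assuming a PyTorch setting.
# Module names, the partition rule, and the fixed dilation factor are
# illustrative guesses based on the abstract, not the reference implementation.
import torch
import torch.nn as nn


class GlanceGazeBlock(nn.Module):
    def __init__(self, dim, num_heads=4, dilation=2):
        super().__init__()
        self.dilation = dilation
        self.norm = nn.LayerNorm(dim)
        # Glance: multi-head self-attention applied within dilated partitions.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gaze: depth-wise convolution that restores local spatial context.
        self.gaze = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):
        # x: (B, C, H, W) feature map; H and W must be divisible by dilation.
        B, C, H, W = x.shape
        d = self.dilation
        # Glance branch: group tokens by their offset modulo d, so each of the
        # d*d partitions samples the whole image sparsely (dilated), then
        # attend within each partition. In the paper the dilation adapts to
        # the input size so every partition keeps a fixed token count (hence
        # the linear complexity); a fixed d is used here only for simplicity.
        g = x.reshape(B, C, H // d, d, W // d, d)
        g = g.permute(0, 3, 5, 2, 4, 1)                  # (B, d, d, H/d, W/d, C)
        g = g.reshape(B * d * d, (H // d) * (W // d), C)
        g = self.norm(g)
        g, _ = self.attn(g, g, g)
        # Undo the partitioning to recover the (B, C, H, W) layout.
        g = g.reshape(B, d, d, H // d, W // d, C)
        g = g.permute(0, 5, 3, 1, 4, 2).reshape(B, C, H, W)
        # Gaze branch: depth-wise convolution supplies the local context that
        # the sparse Glance attention misses; the two branches are summed.
        return g + self.gaze(x)
```

As a quick sanity check under these assumptions, `GlanceGazeBlock(dim=96)(torch.randn(2, 96, 56, 56))` returns a tensor of the same shape as its input.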
Author Information
Qihang Yu (Johns Hopkins University)
Yingda Xia (Johns Hopkins University)
Yutong Bai (Johns Hopkins University)
Yongyi Lu
Alan Yuille (JHU)
Wei Shen (Shanghai Jiao Tong University)
More from the Same Authors
- 2021 : Occluded Video Instance Segmentation: Dataset and ICCV 2021 Challenge »
  Jiyang Qi · Yan Gao · Yao Hu · Xinggang Wang · Xiaoyu Liu · Xiang Bai · Serge Belongie · Alan Yuille · Philip Torr · Song Bai
- 2021 : Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping »
  Prakhar Kaushik · Adam Kortylewski · Alex Gain · Alan Yuille
- 2022 : Synthetic Tumors Make AI Segment Tumors Better »
  Qixin Hu · Junfei Xiao · Alan Yuille · Zongwei Zhou
- 2022 : Assembling Existing Labels from Public Datasets to Diagnose Novel Diseases: COVID-19 in Late 2019 »
  Zengle Zhu · Mintong Kang · Alan Yuille · Zongwei Zhou
- 2022 : Making Your First Choice: To Address Cold Start Problem in Vision Active Learning »
  Liangyu Chen · Yutong Bai · Siyu Huang · Yongyi Lu · Bihan Wen · Alan Yuille · Zongwei Zhou
- 2023 Poster: ReMaX: Relaxing for Better Training on Efficient Panoptic Segmentation »
  Shuyang Sun · Weijun Wang · Andrew Howard · Qihang Yu · Philip Torr · Liang-Chieh Chen
- 2023 Poster: FC-CLIP: Open-Vocabulary Panoptic Segmentation with a Single Frozen Convolutional CLIP »
  Qihang Yu · Ju He · Xueqing Deng · Xiaohui Shen · Liang-Chieh Chen
- 2021 Poster: Are Transformers more robust than CNNs? »
  Yutong Bai · Jieru Mei · Alan Yuille · Cihang Xie
- 2021 Poster: Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose »
  Angtian Wang · Shenxiao Mei · Alan Yuille · Adam Kortylewski
- 2017 Poster: Label Distribution Learning Forests »
  Wei Shen · Kai Zhao · Yilu Guo · Alan Yuille