
A Transformer-Based Object Detector with Coarse-Fine Crossing Representations
Zhishan Li · Ying Nie · Kai Han · Jianyuan Guo · Lei Xie · Yunhe Wang

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #212

Transformer-based object detectors have recently shown competitive performance. Compared with convolutional neural networks, which are limited by relatively small receptive fields, transformers have the advantage of perceiving long-range dependencies among all image patches for visual tasks; their deficiency is that local fine-grained information is not fully exploited. In this paper, we introduce coarse-grained and fine-grained crossing representations to build an efficient Detection Transformer (CFDT). Specifically, we propose a local-global cross fusion module to establish connections between local fine-grained features and global coarse-grained features. In addition, we propose a coarse-fine aware neck that enables detection tokens to interact with both coarse-grained and fine-grained features. Furthermore, an efficient feature integration module is presented for fusing multi-scale representations from different stages. Experimental results on the COCO dataset demonstrate the effectiveness of the proposed method. For instance, our CFDT achieves 48.1 AP with 173G FLOPs, offering higher accuracy and less computation than the state-of-the-art transformer-based detector ViDT. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/CFDT.
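The abstract does not spell out how the local-global cross fusion is implemented. As a rough illustration only (all names and shapes are hypothetical, not taken from the paper), a bidirectional exchange between a fine-grained token set and a coarse-grained token set can be sketched with plain cross-attention, each stream querying the other:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    # queries: (n_q, d), keys_values: (n_kv, d)
    # each query token aggregates information from the other token set
    scores = queries @ keys_values.T / np.sqrt(d_k)
    return softmax(scores) @ keys_values

rng = np.random.default_rng(0)
d = 8
fine = rng.standard_normal((16, d))   # hypothetical fine-grained local tokens
coarse = rng.standard_normal((4, d))  # hypothetical coarse-grained global tokens

# bidirectional fusion with residual connections:
# fine tokens attend to coarse tokens and vice versa
fine_fused = fine + cross_attention(fine, coarse, d)
coarse_fused = coarse + cross_attention(coarse, fine, d)
```

This is only a minimal sketch of the general idea of crossing two feature granularities; the actual module in CFDT may differ substantially (e.g. in projections, normalization, and multi-head structure).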

Author Information

Zhishan Li (Zhejiang University)
Ying Nie (Huawei Technologies Ltd.)
Kai Han (Huawei Noah's Ark Lab)
Jianyuan Guo (University of Sydney)
Lei Xie (Zhejiang University)
Yunhe Wang (Huawei Noah's Ark Lab)
