Poster
RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
Cheng Chi · Fangyun Wei · Han Hu
Existing object detection frameworks are usually built on a single format of object/part representation: anchor/proposal rectangular boxes in RetinaNet and Faster R-CNN, center points in FCOS and RepPoints, and corner points in CornerNet. While these different representations usually drive the frameworks to perform well in different aspects, e.g., better classification or finer localization, it is in general difficult to combine them in a single framework to exploit each one's strength, because different representations extract features in heterogeneous or non-grid ways. This paper presents an attention-based decoder module, similar to that in the Transformer (Vaswani et al., 2017), which bridges other representations into a typical object detector built on a single representation format, in an end-to-end fashion. The other representations act as a set of key instances to strengthen the main query representation features of the vanilla detectors. Novel techniques are proposed for efficient computation of the decoder module, including a key sampling approach and a shared location embedding approach. The proposed module is named bridging visual representations (BVR). It can be applied in place, and we demonstrate its broad effectiveness in bridging other representations into prevalent object detection frameworks, including RetinaNet, Faster R-CNN, FCOS, and ATSS, where improvements of about 1.5–3.0 AP are achieved. In particular, we improve a state-of-the-art framework with a strong backbone by about 2.0 AP, reaching 52.7 AP on COCO test-dev. The resulting network is named RelationNet++. The code is available at https://github.com/microsoft/RelationNet2.
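To make the decoder idea concrete, below is a minimal sketch of the abstract's scheme: query features from the detector's main representation attend to a sampled set of key features from another representation. The function name, the top-k score-based key sampling, and the distance-based geometry bias standing in for the shared location embedding are all illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def bvr_attention(query_feats, key_feats, key_scores, query_locs, key_locs, num_keys=50):
    """Illustrative BVR-style decoder step (not the paper's implementation).

    query_feats: (N, C) features of the detector's main representation
    key_feats:   (M, C) features of the auxiliary representation (e.g. corners)
    key_scores:  (M,)   per-key confidence used for key sampling
    query_locs:  (N, 2), key_locs: (M, 2) normalized 2D locations
    """
    # Key sampling: keep only the top-scoring keys to bound the attention cost.
    k = min(num_keys, len(key_feats))
    top = np.argsort(key_scores)[::-1][:k]
    keys = vals = key_feats[top]

    # Geometry term: a simple distance bias standing in for the shared
    # location embedding (assumption) -- nearer keys get higher attention.
    rel = query_locs[:, None, :] - key_locs[top][None, :, :]     # (N, k, 2)
    geo = -np.linalg.norm(rel, axis=-1)                          # (N, k)

    # Scaled dot-product attention with the geometry bias added to the logits.
    logits = query_feats @ keys.T / np.sqrt(keys.shape[-1]) + geo
    logits -= logits.max(axis=-1, keepdims=True)                 # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)

    # Strengthen the query features with the attended key features (residual).
    return query_feats + attn @ vals
```

In the paper the output replaces the original features in place, so the surrounding detection head (classification or regression branch) is unchanged; that is what lets BVR plug into RetinaNet, Faster R-CNN, FCOS, and ATSS without architectural surgery.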
Author Information
Cheng Chi (University of Chinese Academy of Sciences)
Fangyun Wei (Microsoft Research Asia)
Han Hu (Microsoft Research Asia)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
  Thu. Dec 10th, 04:20–04:30 AM, Orals & Spotlights: Vision Applications
More from the Same Authors
- 2021 Spotlight: Aligning Pretraining for Detection via Object-Level Contrastive Learning
  Fangyun Wei · Yue Gao · Zhirong Wu · Han Hu · Stephen Lin
- 2021 Spotlight: Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning
  Hanzhe Hu · Fangyun Wei · Han Hu · Qiwei Ye · Jinshi Cui · Liwei Wang
- 2021 Spotlight: Bootstrap Your Object Detector via Mixed Training
  Mengde Xu · Zheng Zhang · Fangyun Wei · Yutong Lin · Yue Cao · Stephen Lin · Han Hu · Xiang Bai
- 2021 Poster: Aligning Pretraining for Detection via Object-Level Contrastive Learning
  Fangyun Wei · Yue Gao · Zhirong Wu · Han Hu · Stephen Lin
- 2021 Poster: Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning
  Hanzhe Hu · Fangyun Wei · Han Hu · Qiwei Ye · Jinshi Cui · Liwei Wang
- 2021 Poster: Bootstrap Your Object Detector via Mixed Training
  Mengde Xu · Zheng Zhang · Fangyun Wei · Yutong Lin · Yue Cao · Stephen Lin · Han Hu · Xiang Bai
- 2020 Poster: RepPoints v2: Verification Meets Regression for Object Detection
  Yihong Chen · Zheng Zhang · Yue Cao · Liwei Wang · Stephen Lin · Han Hu
- 2020 Poster: Parametric Instance Classification for Unsupervised Visual Feature Learning
  Yue Cao · Zhenda Xie · Bin Liu · Yutong Lin · Zheng Zhang · Han Hu
- 2020 Poster: Restoring Negative Information in Few-Shot Object Detection
  Yukuan Yang · Fangyun Wei · Miaojing Shi · Guoqi Li