
Unsupervised Object Detection Pretraining with Joint Object Priors Generation and Detector Learning
Yizhou Wang · Meilin Chen · Shixiang Tang · Feng Zhu · Haiyang Yang · Lei Bai · Rui Zhao · Yunfeng Yan · Donglian Qi · Wanli Ouyang


Unsupervised pretraining methods for object detection aim to learn object discrimination and localization from large amounts of unlabeled images. Recent works typically design pretext tasks that supervise the detector to predict predefined object priors. They usually rely on heuristic methods, e.g., selective search, to produce these priors, which decouples prior generation from detector learning and leads to sub-optimal solutions. In this work, we propose a novel object detection pretraining framework that generates object priors and learns the detector jointly, producing accurate object priors from the model itself. Specifically, region priors are extracted from the encoder's attention maps, which highlight foregrounds, while instance priors are the selected high-quality bounding boxes output by the detection decoder. By treating objects as instances in the foreground, we generate object priors from both region and instance priors. Moreover, our object priors are refined jointly with the detector optimization: with better object priors as supervision, the model achieves better detection capability, which in turn promotes object prior generation. Our method improves over competitive approaches by +1.3 AP and +1.7 AP in the 1% and 10% COCO low-data regimes.
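The combination described in the abstract, treating objects as high-confidence instances whose centers fall inside the attended foreground, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, thresholds, and the center-in-mask rule are all assumptions made for clarity.

```python
import numpy as np

def region_prior(attn, thresh=0.5):
    """Hypothetical region prior: binarize a normalized encoder
    attention map into a foreground mask (H x W)."""
    a = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    return a >= thresh

def select_object_priors(boxes, scores, attn,
                         score_thresh=0.7, attn_thresh=0.5):
    """Keep decoder boxes that are both high-confidence (instance
    prior) and centered in the foreground mask (region prior)."""
    mask = region_prior(attn, attn_thresh)
    keep = []
    for (x0, y0, x1, y1), s in zip(boxes, scores):
        cx, cy = int((x0 + x1) / 2), int((y0 + y1) / 2)
        if s >= score_thresh and mask[cy, cx]:
            keep.append((x0, y0, x1, y1))
    return keep

# Toy example: foreground in the top-left quadrant of an 8x8 map.
attn = np.zeros((8, 8))
attn[:4, :4] = 1.0
boxes = [(0, 0, 3, 3), (0, 0, 3, 3), (5, 5, 7, 7)]
scores = [0.9, 0.3, 0.9]
priors = select_object_priors(boxes, scores, attn)
```

In the toy example only the first box survives: the second is discarded for low confidence and the third for lying outside the attended foreground. In the paper's loop these surviving priors would then supervise the detector, whose improved outputs feed back into the next round of prior selection.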

Author Information

Yizhou Wang (Zhejiang University)
Meilin Chen (Zhejiang University)
Shixiang Tang (University of Sydney)
Feng Zhu (SenseTime Research)
Haiyang Yang (Nanjing University)
Lei Bai (UNSW Sydney)
Rui Zhao (Qing Yuan Research Institute, Shanghai Jiao Tong University)
Yunfeng Yan (Zhejiang University)
Donglian Qi (Zhejiang University)
Wanli Ouyang (University of Sydney)
