Learning Efficient Object Detection Models with Knowledge Distillation
Guobin Chen · Wongun Choi · Xiang Yu · Tony Han · Manmohan Chandraker

Mon Dec 04 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #122

Despite significant accuracy improvements in convolutional neural network (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer parameters, but at a substantial cost in accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.
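The two detection-specific losses mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the choice of class 0 as background, and the weight and margin values are all assumptions for demonstration. The weighted cross-entropy upweights the background class to counter foreground/background imbalance, and the teacher-bounded loss penalizes the student's box regression only when its error exceeds the teacher's by a margin.

```python
import numpy as np

def weighted_soft_ce(student_logits, teacher_logits, T=1.0, w_bg=1.5):
    """Weighted soft cross-entropy between teacher and student class
    distributions. Class index 0 is assumed to be background; w_bg
    upweights it to address class imbalance (illustrative value)."""
    def softmax(z):
        e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
        return e / e.sum(axis=-1, keepdims=True)
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    w = np.ones(student_logits.shape[-1])
    w[0] = w_bg  # assumed background index
    return float(-(w * p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean())

def teacher_bounded_l2(student_reg, teacher_reg, target, margin=0.0):
    """Teacher-bounded regression loss: count the student's L2 error
    only when it exceeds the teacher's error by the margin, so the
    student is not pushed past a teacher it already outperforms."""
    err_s = np.sum((student_reg - target) ** 2, axis=-1)
    err_t = np.sum((teacher_reg - target) ** 2, axis=-1)
    return float(np.where(err_s + margin > err_t, err_s, 0.0).mean())
```

In a full pipeline these terms would be combined with the standard detection losses on ground-truth labels; the bounded form keeps the teacher's regression output as an upper bound rather than a direct target.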

Author Information

Guobin Chen (University of Missouri)
Wongun Choi (NEC Laboratories)
Xiang Yu (NEC Laboratories America)

I am a researcher at NEC Laboratories America. I am mainly interested in computer vision and machine learning. My current research focuses on object and face recognition, generative models for data synthesis, feature correspondence and landmark localization, and metric learning in disentangling factors of variations for recognition.

Tony Han (University of Missouri)
Manmohan Chandraker (University of California, San Diego)
