Knowledge distillation constitutes a simple yet effective way to improve the performance of a compact student network by exploiting the knowledge of a more powerful teacher. Nevertheless, the knowledge distillation literature remains limited to the scenario where the student and the teacher tackle the same task. Here, we investigate the problem of transferring knowledge not only across architectures but also across tasks. To this end, we study the case of object detection and, instead of following the standard detector-to-detector distillation approach, introduce a classifier-to-detector knowledge transfer framework. In particular, we propose strategies to exploit the classification teacher to improve both the detector's recognition accuracy and localization performance. Our experiments on several detectors with different backbones demonstrate the effectiveness of our approach, allowing us to outperform the state-of-the-art detector-to-detector distillation methods.
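For context on the baseline this work departs from: standard knowledge distillation (Hinton et al., 2015) trains the student to match the teacher's temperature-softened output distribution alongside the usual hard-label loss. The PyTorch-style sketch below shows only this generic classification form, not the paper's classifier-to-detector transfer; the temperature T and mixing weight alpha are illustrative hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard soft-label knowledge distillation (Hinton et al., 2015).

    Combines cross-entropy on the ground-truth labels with a KL term
    that pulls the student's temperature-softened distribution toward
    the teacher's. T and alpha are illustrative hyperparameters.
    """
    # Hard-label supervision.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term; the T**2 factor rescales gradients so the two
    # terms stay on a comparable scale as T grows.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```

In the detector-to-detector methods the paper compares against, both networks produce detection outputs, so a loss of this form can be applied directly; the paper's setting instead uses a classification teacher, which requires the transfer strategies described in the abstract.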
Author Information
Shuxuan Guo (EPFL)
Jose M. Alvarez (NVIDIA)
Mathieu Salzmann (EPFL)
More from the Same Authors
- 2021: SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation
  Robin Chan · Krzysztof Lis · Svenja Uhlemeyer · Hermann Blum · Sina Honari · Roland Siegwart · Pascal Fua · Mathieu Salzmann · Matthias Rottmann
- 2022 Poster: Structural Pruning via Latency-Saliency Knapsack
  Maying Shen · Hongxu Yin · Pavlo Molchanov · Lei Mao · Jianna Liu · Jose M. Alvarez
- 2022 Poster: Optimizing Data Collection for Machine Learning
  Rafid Mahmood · James Lucas · Jose M. Alvarez · Sanja Fidler · Marc Law
- 2022 Poster: Contact-aware Human Motion Forecasting
  Wei Mao · Miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2022 Spotlight: Lightning Talks 6B-2
  Alexander Korotin · Jinyuan Jia · Weijian Deng · Shi Feng · Maying Shen · Denizalp Goktas · Fang-Yi Yu · Alexander Kolesov · Sadie Zhao · Stephen Gould · Hongxu Yin · Wenjie Qu · Liang Zheng · Evgeny Burnaev · Amy Greenwald · Neil Gong · Pavlo Molchanov · Yiling Chen · Lei Mao · Jianna Liu · Jose M. Alvarez
- 2022 Spotlight: Structural Pruning via Latency-Saliency Knapsack
  Maying Shen · Hongxu Yin · Pavlo Molchanov · Lei Mao · Jianna Liu · Jose M. Alvarez
- 2022 Spotlight: Lightning Talks 4B-3
  Zicheng Zhang · Mancheng Meng · Antoine Guedon · Yue Wu · Wei Mao · Zaiyu Huang · Peihao Chen · Shizhe Chen · Yongwei Chen · Keqiang Sun · Yi Zhu · Chen Rui · Hanhui Li · Dongyu Ji · Ziyan Wu · Miaomiao Liu · Pascal Monasse · Yu Deng · Shangzhe Wu · Pierre-Louis Guhur · Jiaolong Yang · Kunyang Lin · Makarand Tapaswi · Zhaoyang Huang · Terrence Chen · Jiabao Lei · Jianzhuang Liu · Vincent Lepetit · Zhenyu Xie · Richard I Hartley · Dinggang Shen · Xiaodan Liang · Runhao Zeng · Cordelia Schmid · Michael Kampffmeyer · Mathieu Salzmann · Ning Zhang · Fangyun Wei · Yabin Zhang · Fan Yang · Qifeng Chen · Wei Ke · Quan Wang · Thomas Li · Qingling Cai · Kui Jia · Ivan Laptev · Mingkui Tan · Xin Tong · Hongsheng Li · Xiaodan Liang · Chuang Gan
- 2022 Spotlight: Contact-aware Human Motion Forecasting
  Wei Mao · Miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2022 Poster: Robust Binary Models by Pruning Randomly-initialized Networks
  Chen Liu · Ziqi Zhao · Sabine Süsstrunk · Mathieu Salzmann
- 2021 Poster: Learning Transferable Adversarial Perturbations
  Krishna Kanth Nakka · Mathieu Salzmann
- 2021 Poster: SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
  Enze Xie · Wenhai Wang · Zhiding Yu · Anima Anandkumar · Jose M. Alvarez · Ping Luo
- 2020 Poster: On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
  Chen Liu · Mathieu Salzmann · Tao Lin · Ryota Tomioka · Sabine Süsstrunk
- 2020 Poster: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2020 Spotlight: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2019 Poster: Backpropagation-Friendly Eigendecomposition
  Wei Wang · Zheng Dang · Yinlin Hu · Pascal Fua · Mathieu Salzmann
- 2017 Poster: Compression-aware Training of Deep Networks
  Jose Alvarez · Mathieu Salzmann
- 2017 Poster: Deep Subspace Clustering Networks
  Pan Ji · Tong Zhang · Hongdong Li · Mathieu Salzmann · Ian Reid
- 2016 Poster: Learning the Number of Neurons in Deep Networks
  Jose M. Alvarez · Mathieu Salzmann