
Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation
Yichen Zhu · Ning Liu · Zhiyuan Xu · Xin Liu · Weibin Meng · Louis Wang · Zhicai Ou · Jian Tang

Tue Dec 06 09:00 AM -- 11:00 AM (PST) @

Knowledge distillation (KD) can effectively compress neural networks by training a smaller network (the student) to mimic the behavior of a larger one (the teacher). A counter-intuitive observation is that a larger teacher does not necessarily make a better student, but the reasons for this phenomenon remain unclear. In this paper, we demonstrate that it is directly attributable to the presence of "undistillable classes": when trained with distillation, the teacher's knowledge of some classes is incomprehensible to the student model. We observe that while KD improves overall accuracy, it comes at the cost of reduced accuracy on these undistillable classes. After establishing their widespread existence in state-of-the-art distillation methods, we show that they correlate with the capacity gap between the teacher and student models. Finally, we present a simple Teach Less, Learn More (TLLM) framework to identify and discard the undistillable classes during training. We validate the effectiveness of our approach on multiple datasets with varying network architectures. In all settings, our proposed method exceeds the performance of competitive state-of-the-art techniques.
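For readers unfamiliar with the distillation objective the abstract refers to, the following is a minimal sketch of the standard KD loss (temperature-softened KL divergence between teacher and student outputs, in the style of Hinton et al.), not an implementation of the paper's TLLM framework. All function names and the temperature value are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: a higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between teacher and student soft targets,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student soft predictions
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits incur zero distillation loss; mismatched logits do not.
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(kd_loss([0.5, 1.5, 0.1], [2.0, 1.0, 0.1]) > 0.0)
```

The paper's contribution, per the abstract, is to detect the classes on which this per-class mimicry fails and exclude them from the distillation signal during training.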

Author Information

Yichen Zhu (Midea Group)
Ning Liu (Midea)
Zhiyuan Xu (Midea)
Xin Liu (East China Normal University)
Weibin Meng (Computer Science, Tsinghua University)
Louis Wang (Midea)
Zhicai Ou (Midea Group)
Jian Tang (DiDi AI Labs, DiDi Chuxing)
