From model compression to self-distillation: a review
Samira Ebrahimi Kahou

In this short talk, she presents some of the major milestones in model compression and knowledge distillation, starting with the seminal work of Buciluǎ et al. She also covers applications of knowledge distillation in cross-modal learning, few-shot learning, reinforcement learning, and natural language processing.
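For readers unfamiliar with knowledge distillation, the core idea covered in the talk can be illustrated with the classic temperature-scaled distillation loss popularized by Hinton et al. The sketch below is illustrative only and not taken from the talk; the function names and the choice of temperature are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # A higher temperature softens the distribution, exposing the
    # teacher's "dark knowledge" about relative class similarities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the softened teacher distribution to the
    # softened student distribution, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)  # soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

When the student's logits match the teacher's, the loss is zero; in training, this term is typically combined with the ordinary cross-entropy on the true labels.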

Author Information

Samira Ebrahimi Kahou (McGill University)