
Keynote Talk in Workshop: Efficient Natural Language and Speech Processing (Models, Training, and Inference)

From model compression to self-distillation: a review

Samira Ebrahimi Kahou


Abstract:

In this short talk, the speaker presents some of the major milestones in model compression and knowledge distillation, starting with the seminal work of Buciluǎ et al. She also covers applications of knowledge distillation in cross-modal learning, few-shot learning, reinforcement learning, and natural language processing.
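
For readers unfamiliar with the technique, the sketch below shows the standard knowledge-distillation objective in the style of Hinton et al. (2015): a student is trained on a weighted mix of the usual hard-label loss and a KL term that matches its temperature-softened predictions to the teacher's. This is a generic illustration, not material from the talk; the function name, temperature, and mixing weight alpha are illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Classic knowledge-distillation loss (illustrative sketch).

    Combines hard-label cross-entropy with the KL divergence between
    temperature-softened teacher and student distributions.
    """
    # Soften both distributions with the same temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # Scale the KL term by T^2 so gradient magnitudes stay comparable
    # to the cross-entropy term as the temperature changes.
    kd_term = F.kl_div(soft_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In practice the teacher's logits are computed under torch.no_grad(), and only the student's parameters receive gradients from this loss.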