From model compression to self-distillation: a review
Samira Ebrahimi Kahou
2021 Keynote Talk
in
Workshop: Efficient Natural Language and Speech Processing (Models, Training, and Inference)
Abstract
In this short talk, the speaker presents some of the major milestones in model compression and knowledge distillation, starting with the seminal work of Buciluǎ et al. She also covers applications of knowledge distillation in cross-modal learning, few-shot learning, reinforcement learning, and natural language processing.
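As background for the topic of the talk, the sketch below shows the canonical knowledge-distillation loss of Hinton et al. (2015): a weighted sum of a temperature-softened KL term against the teacher's outputs and a standard cross-entropy term against the hard labels. This is a minimal illustration assuming a PyTorch setup; the function name and the temperature and alpha values are illustrative defaults, not details taken from the talk.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Illustrative knowledge-distillation loss (Hinton et al., 2015 style).

    Hyperparameter values are placeholders, not from the talk.
    """
    # Soften both distributions with the temperature before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so its gradient magnitude stays comparable
    # to the cross-entropy term when the temperature is raised.
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Hard-label supervision on the student's unsoftened logits.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```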