

Poster in Workshop: Second Workshop on Efficient Natural Language and Speech Processing (ENLSP-II)

Towards Data Efficient And Robust Speech Representation Model Distillation

Pheobe Sun · Ruibo Shi · Ahmad Emami · Sean Moran

Keywords: [ ENLSP-Main ]


Abstract:

While knowledge distillation has proven effective for learning smaller student models on various tasks, a large amount of distillation training data is required to keep the student model's performance competitive with the teacher model. Our research aims to further improve the efficiency of task-agnostic speech representation model pre-training. By perturbing the training data distribution, we distil a more robust task-agnostic speech representation model with a lower training-data requirement. By learning representations from both (a) the teacher model, which is trained via self-supervised learning (SSL), and (b) hand-crafted features known to be effective, we regularize the student and compensate for the representation loss incurred during distillation. Our proposed methods are evaluated on a number of downstream tasks and shown to be effective in certain aspects, motivating future research that builds on our work to develop efficient task-agnostic speech representation model distillation approaches.
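
To make the dual-target idea concrete, the sketch below shows one possible way to combine a feature-matching loss toward the SSL teacher's representations with an auxiliary loss toward hand-crafted features (MFCCs are used here only as an example of such features). This is a minimal illustration under stated assumptions, not the authors' implementation: the module names, feature dimensions, loss type, and weighting are all illustrative choices.

```python
# Minimal sketch (assumptions, not the paper's code): a student is trained to
# match (a) the SSL teacher's frame-level representations and (b) hand-crafted
# features (e.g. MFCCs), with the latter acting as a regularizer.
import torch
import torch.nn as nn


class DualTargetDistillationLoss(nn.Module):
    def __init__(self, student_dim=256, teacher_dim=768, n_mfcc=40, alpha=0.5):
        super().__init__()
        # Project student features into the teacher space and the MFCC space.
        self.to_teacher = nn.Linear(student_dim, teacher_dim)
        self.to_mfcc = nn.Linear(student_dim, n_mfcc)
        self.alpha = alpha  # illustrative weight between the two objectives
        self.l1 = nn.L1Loss()

    def forward(self, student_feats, teacher_feats, mfcc_feats):
        # student_feats: (B, T, student_dim), teacher_feats: (B, T, teacher_dim),
        # mfcc_feats: (B, T, n_mfcc); all assumed aligned to the same frame rate.
        loss_teacher = self.l1(self.to_teacher(student_feats), teacher_feats)
        loss_handcrafted = self.l1(self.to_mfcc(student_feats), mfcc_feats)
        return self.alpha * loss_teacher + (1.0 - self.alpha) * loss_handcrafted


# Example usage with random tensors standing in for real features:
if __name__ == "__main__":
    student = torch.randn(4, 100, 256)
    teacher = torch.randn(4, 100, 768)
    mfcc = torch.randn(4, 100, 40)
    loss = DualTargetDistillationLoss()(student, teacher, mfcc)
    print(loss.item())
```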
