

Poster in Workshop: Efficient Natural Language and Speech Processing (Models, Training, and Inference)

Pruning Encoders with a Multitask Objective

Patrick Xia · Richard Shin


Abstract:

Models for natural language processing increasingly rely on pretrained language models, which serve as a starting point for downstream applications. However, their large size can be prohibitive for use in even a single task, and the challenge grows when multiple downstream tasks are desired. In this work, we adopt recent strategies for model pruning during finetuning to explore whether it is possible to prune a single encoder so that it can be used for multiple tasks. We allocate a fixed parameter budget and compare pruning a single model with a multitask objective against the best ensemble of single-task models. We find that under two pruning strategies (element-wise and rank pruning), the multitask-objective approach outperforms separately trained models on the combined objective and is competitive on each individual task. Additional analysis finds that using a multitask objective during pruning can also be an effective method for reducing model size on tasks with smaller datasets.
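
The sketch below is an illustrative reconstruction, not the authors' code: it shows one way to finetune a shared encoder on a summed multitask loss and then apply element-wise (magnitude) pruning toward a fixed parameter budget using PyTorch's built-in pruning utilities. The toy Transformer encoder, the two hypothetical task names, and the 50% sparsity target are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SharedEncoderMultitask(nn.Module):
    """A single shared encoder with one lightweight classification head per task."""
    def __init__(self, hidden=256, task_labels=None):
        super().__init__()
        task_labels = task_labels or {"taskA": 3, "taskB": 2}  # hypothetical tasks
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for a pretrained LM
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, n) for t, n in task_labels.items()})

    def forward(self, x, task):
        pooled = self.encoder(x).mean(dim=1)  # mean-pool token representations
        return self.heads[task](pooled)

model = SharedEncoderMultitask()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def multitask_step(batches):
    """batches: dict task -> (inputs, labels); per-task losses are simply summed."""
    optimizer.zero_grad()
    total = sum(loss_fn(model(x, task), y) for task, (x, y) in batches.items())
    total.backward()
    optimizer.step()
    return total.item()

def prune_to_budget(sparsity=0.5):
    """Element-wise magnitude pruning applied globally over encoder weight matrices."""
    params = [(m, "weight") for m in model.encoder.modules() if isinstance(m, nn.Linear)]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=sparsity)
```

In this sketch, the same pruned encoder serves every task; the comparison described in the abstract would instead allot the same total budget to separately pruned single-task models and evaluate both configurations on each task.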
