

Poster in Workshop: Efficient Natural Language and Speech Processing (Models, Training, and Inference)

A Short Study on Compressing Decoder-Based Language Models

Tianda Li · Yassir El Mesbahi · Ivan Kobyzev · Ahmad Rashid · Atif Mahmud · Nithin Anchuri · Habib Hajimolahoseini · Yang Liu · Mehdi Rezagholizadeh


Abstract:

Pre-trained Language Models (PLMs) have been successful for a wide range of natural language processing (NLP) tasks. State-of-the-art (SOTA) PLMs, however, are too large to be used on edge devices. As a result, the topic of model compression has attracted increasing attention in the NLP community. Most existing works focus on compressing encoder-based models (TinyBERT, DistilBERT, DistilRoBERTa, etc.); however, to the best of our knowledge, the compression of decoder-based models (such as GPT-2) has not been investigated much. Our paper aims to fill this gap. Specifically, we explore two directions: 1) we employ current SOTA knowledge distillation techniques to improve the fine-tuning of DistilGPT-2; 2) we pre-train a compressed GPT-2 model using layer truncation and compare it against distillation-based methods. The training time of our compressed model is significantly shorter than that of DistilGPT-2, yet it achieves better performance when fine-tuned on downstream tasks. We also demonstrate the impact of data cleaning on model performance.
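The abstract does not give implementation details, but the layer-truncation idea can be sketched as follows, assuming the Hugging Face transformers API; the depth keep_layers = 6 and the function name truncate_gpt2 are illustrative assumptions, not the authors' code. The sketch keeps the bottom transformer blocks of GPT-2 and copies their weights into a shallower model, which would then be further pre-trained.

# Minimal sketch (hypothetical, not the authors' implementation) of layer truncation:
# keep only the bottom `keep_layers` transformer blocks of GPT-2.
from transformers import GPT2Config, GPT2LMHeadModel

def truncate_gpt2(model_name: str = "gpt2", keep_layers: int = 6) -> GPT2LMHeadModel:
    """Return a GPT-2 LM whose transformer stack is cut to `keep_layers` blocks."""
    full = GPT2LMHeadModel.from_pretrained(model_name)
    # Same configuration, but with a smaller declared depth.
    config = GPT2Config.from_pretrained(model_name, n_layer=keep_layers)
    small = GPT2LMHeadModel(config)
    # Reuse the token/position embeddings and the first `keep_layers` blocks.
    small.transformer.wte.load_state_dict(full.transformer.wte.state_dict())
    small.transformer.wpe.load_state_dict(full.transformer.wpe.state_dict())
    for i in range(keep_layers):
        small.transformer.h[i].load_state_dict(full.transformer.h[i].state_dict())
    small.transformer.ln_f.load_state_dict(full.transformer.ln_f.state_dict())
    # The LM head is weight-tied to wte in GPT-2, so no extra copy is needed.
    return small

if __name__ == "__main__":
    compressed = truncate_gpt2("gpt2", keep_layers=6)
    print(sum(p.numel() for p in compressed.parameters()))  # parameter count of the truncated model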
