

Spotlight in Workshop: Second Workshop on Efficient Natural Language and Speech Processing (ENLSP-II)

Fast DistilBERT on CPUs

Haihao Shen · Ofir Zafrir · Bo Dong · Hengyu Meng · Xinyu Ye · Zhe Wang · Yi Ding · Hanwen Chang · Guy Boudoukh · Moshe Wasserblat

Keywords: [ ENLSP-Main ] [ Efficient Graphs for NLP ]


Abstract:

Transformer-based language models have become the standard approach to solving natural language processing tasks. However, industry adoption usually requires maximizing throughput while complying with strict latency constraints, which prevents Transformer models from being used in production. To address this gap, model compression techniques such as quantization and pruning can be used to improve inference efficiency. However, these techniques require specialized software to be applied and deployed at scale. In this work, we propose a new pipeline for creating and running Fast Transformer models on CPUs, utilizing hardware-aware pruning, knowledge distillation, quantization, and our own Transformer inference runtime engine with optimized kernels for sparse and quantized operators. We demonstrate the efficiency of our pipeline by creating a Fast DistilBERT model that shows minimal accuracy loss on the SQuADv1.1 question-answering benchmark, and we report throughput results under typical production constraints and environments. Our model outperforms the state-of-the-art Neural Magic DeepSparse runtime by up to 50% and achieves up to a 4.1x speedup over ONNX Runtime.
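The abstract combines pruning, distillation, and quantization into a single compression pipeline. As a rough illustration of that kind of recipe (not the paper's actual pipeline, which relies on hardware-aware structured pruning and a custom sparse/quantized runtime), the sketch below applies generic magnitude pruning and post-training dynamic INT8 quantization in PyTorch to a publicly available distilled SQuAD checkpoint; the model name and sparsity level are assumptions for illustration only.

    # Illustrative sketch only: generic magnitude pruning + dynamic INT8 quantization
    # applied to a distilled SQuAD model. This is NOT the paper's pipeline or runtime.
    import torch
    import torch.nn.utils.prune as prune
    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    model_id = "distilbert-base-uncased-distilled-squad"  # public distilled checkpoint (assumed)
    model = AutoModelForQuestionAnswering.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Unstructured magnitude pruning of Linear layers: a simple stand-in for the
    # hardware-aware structured sparsity described in the paper.
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.8)
            prune.remove(module, "weight")  # make the sparsity permanent

    # Post-training dynamic quantization of Linear layers to INT8.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    # Quick smoke test on a SQuAD-style question/context pair.
    inputs = tokenizer(
        "What does the pipeline optimize?",
        "The pipeline optimizes Transformer inference on CPUs.",
        return_tensors="pt",
    )
    with torch.no_grad():
        outputs = quantized(**inputs)
    start = int(outputs.start_logits.argmax())
    end = int(outputs.end_logits.argmax())
    print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))

Note that stock PyTorch dynamic quantization does not exploit sparsity at inference time; the speedups reported in the paper come from its dedicated runtime with kernels optimized for sparse and quantized operators.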
