Extreme compression, particularly ultra-low-bit precision (binary/ternary) quantization, has been proposed to fit large NLP models on resource-constrained devices. However, to preserve accuracy under such aggressive compression schemes, cutting-edge methods usually introduce complicated compression pipelines, e.g., multi-stage, expensive knowledge distillation with extensive hyperparameter tuning. They also tend to focus less on smaller transformer models that have already been heavily compressed via knowledge distillation, and they lack a systematic study demonstrating the effectiveness of their methods. In this paper, we perform a comprehensive, systematic study to measure the impact of many key hyperparameters and training strategies from prior work. As a result, we find that previous baselines for ultra-low-bit precision quantization are significantly under-trained. Based on our study, we propose a simple yet effective compression pipeline for extreme compression. Our simplified pipeline demonstrates that (1) we can skip pre-training knowledge distillation to obtain a 5-layer BERT while achieving better performance than previous state-of-the-art methods, such as TinyBERT; and (2) extreme quantization plus layer reduction can reduce the model size by 50x, resulting in new state-of-the-art results on GLUE tasks.
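To illustrate what ternary (2-bit) weight quantization entails, the following is a minimal PyTorch sketch. The threshold heuristic (0.7 · mean|W|) and the per-tensor scale follow the common Ternary Weight Networks recipe; they are illustrative assumptions, not necessarily the exact scheme used in this paper.

```python
import torch

def ternarize(weight: torch.Tensor) -> torch.Tensor:
    """Quantize a weight tensor to the three values {-alpha, 0, +alpha}.

    Assumption: threshold and scale follow the Ternary Weight Networks
    heuristic (delta = 0.7 * mean|W|), used here only for illustration.
    """
    delta = 0.7 * weight.abs().mean()                 # sparsity threshold
    mask = (weight.abs() > delta).float()             # 1 where the weight is kept
    alpha = (weight.abs() * mask).sum() / mask.sum().clamp(min=1)  # per-tensor scale
    return alpha * torch.sign(weight) * mask          # ternary reconstruction

# Example: quantize one linear layer's weights. In quantization-aware
# training this would be applied in the forward pass with a
# straight-through estimator for the gradients.
layer = torch.nn.Linear(768, 768)
with torch.no_grad():
    layer.weight.copy_(ternarize(layer.weight))
```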
Author Information
Xiaoxia Wu (Microsoft)
Zhewei Yao (UC Berkeley)
Minjia Zhang (Microsoft)
Conglong Li (Microsoft)
Yuxiong He (Microsoft)
More from the Same Authors
- 2022 Spotlight: ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
  Zhewei Yao · Reza Yazdani Aminabadi · Minjia Zhang · Xiaoxia Wu · Conglong Li · Yuxiong He
- 2022 Spotlight: Lightning Talks 5B-2
  Conglong Li · Mohammad Azizmalayeri · Mojan Javaheripi · Pratik Vaishnavi · Jon Hasselgren · Hao Lu · Kevin Eykholt · Arshia Soltani Moakhar · Wenze Liu · Gustavo de Rosa · Nikolai Hofmann · Minjia Zhang · Zixuan Ye · Jacob Munkberg · Amir Rahmati · Arman Zarei · Subhabrata Mukherjee · Yuxiong He · Shital Shah · Reihaneh Zohrabi · Hongtao Fu · Tomasz Religa · Yuliang Liu · Mohammad Manzuri · Mohammad Hossein Rohban · Zhiguo Cao · Caio Cesar Teodoro Mendes · Sebastien Bubeck · Farinaz Koushanfar · Debadeepta Dey
- 2022 Spotlight: The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models
  Conglong Li · Minjia Zhang · Yuxiong He
- 2022 Panel: Panel 2B-4: Extreme Compression for… & Exploring Length Generalization…
  Cem Anil · Minjia Zhang
- 2022 Poster: ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
  Zhewei Yao · Reza Yazdani Aminabadi · Minjia Zhang · Xiaoxia Wu · Conglong Li · Yuxiong He
- 2022 Poster: The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models
  Conglong Li · Minjia Zhang · Yuxiong He
- 2021 Poster: SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training using Gradient Similarity Measurement
  Heyang Qin · Samyam Rajbhandari · Olatunji Ruwase · Feng Yan · Lei Yang · Yuxiong He
- 2021 Poster: NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM
  Connor Holmes · Minjia Zhang · Yuxiong He · Bo Wu
- 2020 Poster: HM-ANN: Efficient Billion-Point Nearest Neighbor Search on Heterogeneous Memory
  Jie Ren · Minjia Zhang · Dong Li
- 2020 Poster: Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping
  Minjia Zhang · Yuxiong He
- 2020 Poster: AdaTune: Adaptive Tensor Program Compilation Made Efficient
  Menghao Li · Minjia Zhang · Chi Wang · Mingqin Li
- 2020 Poster: HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks
  Zhen Dong · Zhewei Yao · Daiyaan Arfeen · Amir Gholami · Michael Mahoney · Kurt Keutzer
- 2020 Poster: A Statistical Framework for Low-bitwidth Training of Deep Neural Networks
  Jianfei Chen · Yu Gai · Zhewei Yao · Michael Mahoney · Joseph Gonzalez
- 2019: Spotlight talks
  Paul Grigas · Zhewei Yao · Aurelien Lucchi · Si Yi Meng
- 2019: Poster Session
  Eduard Gorbunov · Alexandre d'Aspremont · Lingxiao Wang · Liwei Wang · Boris Ginsburg · Alessio Quaglino · Camille Castera · Saurabh Adya · Diego Granziol · Rudrajit Das · Raghu Bollapragada · Fabian Pedregosa · Martin Takac · Majid Jahani · Sai Praneeth Karimireddy · Hilal Asi · Balint Daroczy · Leonard Adolphs · Aditya Rawal · Nicolas Brandt · Minhan Li · Giuseppe Ughi · Orlando Romero · Ivan Skorokhodov · Damien Scieur · Kiwook Bae · Konstantin Mishchenko · Rohan Anil · Vatsal Sharan · Aditya Balu · Chao Chen · Zhewei Yao · Tolga Ergen · Paul Grigas · Chris Junchi Li · Jimmy Ba · Stephen J Roberts · Sharan Vaswani · Armin Eftekhari · Chhavi Sharma
- 2019 Poster: ANODEV2: A Coupled Neural ODE Framework
  Tianjun Zhang · Zhewei Yao · Amir Gholami · Joseph Gonzalez · Kurt Keutzer · Michael Mahoney · George Biros
- 2018 Poster: Hessian-based Analysis of Large Batch Training and Robustness to Adversaries
  Zhewei Yao · Amir Gholami · Qi Lei · Kurt Keutzer · Michael Mahoney
- 2018 Poster: Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models
  Minjia Zhang · Wenhan Wang · Xiaodong Liu · Jianfeng Gao · Yuxiong He