MQBench: Towards Reproducible and Deployable Model Quantization Benchmark
Yuhang Li · Mingzhu Shen · Jian Ma · Yan Ren · Mingxin Zhao · Qi Zhang · Ruihao Gong · Fengwei Yu · Junjie Yan

Model quantization has emerged as an indispensable technique to accelerate deep learning inference. Although researchers continue to push the frontier of quantization algorithms, existing quantization work is often unreproducible and undeployable, because researchers do not choose consistent training pipelines and ignore the requirements of hardware deployment. In this work, we propose Model Quantization Benchmark (MQBench), a first attempt to evaluate, analyze, and benchmark the reproducibility and deployability of model quantization algorithms. We choose multiple different platforms for real-world deployment, including CPU, GPU, ASIC, and DSP, and evaluate extensive state-of-the-art quantization algorithms under a unified training pipeline. MQBench acts as a bridge connecting algorithms and hardware. We conduct a comprehensive analysis and uncover considerable intuitive and counter-intuitive insights. By aligning the training settings, we find that existing algorithms achieve roughly comparable performance on the conventional academic track, whereas for hardware-deployable quantization there remains a large accuracy gap and still a long way to go. Surprisingly, no existing algorithm wins every challenge in MQBench, and we hope this work can inspire future research directions.
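To make the core operation concrete: the quantization algorithms benchmarked here all build on simulating low-precision arithmetic during or after training. Below is a minimal NumPy sketch of uniform affine fake-quantization (quantize to integers, then dequantize), a generic illustration rather than MQBench's actual implementation; the function name and parameters are assumptions for this example.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate uniform affine quantization: map floats onto a
    num_bits integer grid, then map back to floats.
    This is a generic illustration, not MQBench code."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # Scale maps the float range onto the integer range.
    scale = (x.max() - x.min()) / (qmax - qmin)
    # Zero point aligns the float minimum with qmin.
    zero_point = round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    # Dequantize: the values a quantized kernel would effectively compute with.
    return (q - zero_point) * scale

x = np.linspace(-1.0, 1.0, 5)
xq = fake_quantize(x)  # each entry differs from x by at most ~scale/2
```

Hardware backends differ in how they constrain `scale` and `zero_point` (per-tensor vs. per-channel, symmetric vs. affine), which is one source of the deployability gap the benchmark measures.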

Author Information

Yuhang Li (Yale University)
Mingzhu Shen (SenseTime Research)
Jian Ma (Xi'an Jiaotong University)
Yan Ren (Xidian University)
Mingxin Zhao
Qi Zhang (Beihang University)
Ruihao Gong (Beihang University)
Fengwei Yu (Beihang University)
Junjie Yan (SenseTime Group Limited)
