Deep convolutional neural networks (CNNs) are powerful tools for a wide range of vision tasks, but the enormous amount of memory and compute resources they require poses a challenge for deployment on constrained devices. Existing compression techniques, while excelling at reducing model sizes, struggle to be computationally friendly. In this paper, we examine the statistical properties of sparse CNNs and present focused quantization, a novel quantization strategy based on power-of-two values that exploits the weight distributions after fine-grained pruning. The proposed method dynamically discovers the most effective numerical representation for weights in layers with varying sparsities, significantly reducing model sizes. Multiplications in quantized CNNs are replaced with much cheaper bit-shift operations for efficient inference. Coupled with lossless encoding, we build a compression pipeline that provides CNNs with high compression ratios (CR), low computation cost and minimal loss in accuracy. On ResNet-50, we achieve an 18.08x CR with only 0.24% loss in top-5 accuracy, outperforming existing compression methods. We also fully compress a ResNet-18 and find that it not only attains a higher CR and top-5 accuracy than other state-of-the-art quantization methods, but is also more hardware-efficient, requiring fewer logic gates to implement at the same throughput.
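To make the shift-based arithmetic concrete, the short Python sketch below illustrates the general idea of power-of-two weight quantization; it is not the paper's actual focused-quantization algorithm, and the function names, 4-bit exponent width and clipping range are illustrative assumptions. Nonzero weights are rounded to signed powers of two, so multiplying a fixed-point activation by a quantized weight reduces to a bit shift.

import numpy as np

def quantize_pow2(weights, n_bits=4):
    """Round each nonzero weight to sign * 2**e, with e clipped to an n_bits range (assumed layout)."""
    signs = np.sign(weights)
    mags = np.abs(weights)
    # Nearest power-of-two exponent; pruned (zero) weights stay zero.
    exps = np.where(mags > 0, np.rint(np.log2(np.maximum(mags, 1e-12))), 0)
    e_min = -(2 ** (n_bits - 1)) + 1   # e.g. -7 for a 4-bit exponent
    exps = np.clip(exps, e_min, 0).astype(np.int32)
    quantized = np.where(mags > 0, signs * np.ldexp(1.0, exps), 0.0)
    return quantized, exps

def shift_mul(x_fixed, exp):
    """Multiply an integer (fixed-point) activation by 2**exp using a bit shift instead of a multiplier."""
    return x_fixed << exp if exp >= 0 else x_fixed >> -exp

# Example: a weight of 0.23 becomes 2**-2 = 0.25, so scaling a fixed-point
# activation by it is a right shift by 2.
w_q, e = quantize_pow2(np.array([0.23, -0.9, 0.0]))
print(w_q, e)                     # approx. [0.25 -1. 0.] [-2 0 0]
print(shift_mul(64, int(e[0])))   # 64 * 0.25 -> 64 >> 2 = 16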
Author Information
Yiren Zhao (University of Cambridge)
Xitong Gao (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Daniel Bates (University of Cambridge)
Robert Mullins (University of Cambridge)
Cheng-Zhong Xu (University of Macau)
More from the Same Authors
- 2021: DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning
  Robert Hönig · Yiren Zhao · Robert Mullins
- 2022: Dynamic Head Pruning in Transformers
  Prisha Satwani · Yiren Zhao · Vidhi Lalchand · Robert Mullins
- 2022: Revisiting Graph Neural Network Embeddings
  Skye Purchase · Yiren Zhao · Robert Mullins
- 2022: Wide Attention Is The Way Forward For Transformers
  Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
- 2022: SMILE: Sample-to-feature MIxup for Efficient Transfer LEarning
  Xingjian Li · Haoyi Xiong · Cheng-Zhong Xu · Dejing Dou
- 2022: DARTFormer: Finding The Best Type Of Attention
  Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
- 2023 Poster: MiliPoint: A Point Cloud Dataset for mmWave Radar
  Han Cui · Shu Zhong · Jiacheng Wu · Zichao Shen · Naim Dahnoun · Yiren Zhao
- 2022 Poster: Rapid Model Architecture Adaption for Meta-Learning
  Yiren Zhao · Xitong Gao · I Shumailov · Nicolo Fusi · Robert Mullins
- 2022 Poster: MORA: Improving Ensemble Robustness Evaluation with Model Reweighing Attack
  Yunrui Yu · Xitong Gao · Cheng-Zhong Xu
- 2021 Poster: Manipulating SGD with Data Ordering Attacks
  I Shumailov · Zakhar Shumaylov · Dmitry Kazhdan · Yiren Zhao · Nicolas Papernot · Murat Erdogdu · Ross J Anderson