Poster
Discrimination-aware Channel Pruning for Deep Neural Networks
Zhuangwei Zhuang · Mingkui Tan · Bohan Zhuang · Jing Liu · Yong Guo · Qingyao Wu · Junzhou Huang · Jinhui Zhu

Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 210 #87

Channel pruning is one of the predominant approaches for deep model compression. Existing pruning methods either train from scratch with sparsity constraints on channels, or minimize the reconstruction error between the pre-trained feature maps and the compressed ones. Both strategies have limitations: the former is computationally expensive and hard to converge, while the latter optimizes the reconstruction error but ignores the discriminative power of channels. To overcome these drawbacks, we investigate a simple yet effective method, called discrimination-aware channel pruning, to choose the channels that truly contribute to discriminative power. To this end, we introduce additional losses into the network to increase the discriminative power of intermediate layers, and then select the most discriminative channels for each layer by considering both the additional loss and the reconstruction error. Finally, we propose a greedy algorithm that conducts channel selection and parameter optimization in an iterative way. Extensive experiments demonstrate the effectiveness of our method. For example, on ILSVRC-12, our pruned ResNet-50 with a 30% reduction of channels even outperforms the original model by 0.39% in top-1 accuracy.
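The sketch below illustrates the per-layer selection step described in the abstract: channels are greedily picked according to how strongly they affect a joint loss combining an auxiliary discrimination term and the reconstruction error against the pre-trained feature map. This is a simplified illustration, not the authors' implementation; the names `aux_head`, `num_keep`, and `recon_weight` are assumptions introduced here, and PyTorch is assumed as the framework.

```python
# Minimal sketch of discrimination-aware channel selection for one layer.
# Assumes PyTorch; aux_head, num_keep, recon_weight are illustrative names,
# not from the paper's code.
import torch
import torch.nn.functional as F

def select_channels(feat, target_feat, labels, aux_head, num_keep, recon_weight=1.0):
    """Greedily pick the channels of `feat` that most reduce a joint loss:
    an auxiliary discrimination loss plus the reconstruction error against
    the pre-trained feature map `target_feat`.

    feat:        (N, C, H, W) feature map of the pruned layer
    target_feat: (N, C, H, W) feature map of the pre-trained model
    labels:      (N,) class labels for the auxiliary loss
    aux_head:    small classifier applied to the masked feature map
    """
    N, C, H, W = feat.shape
    mask = torch.zeros(C, requires_grad=True)  # soft channel-selection mask

    def joint_loss(m):
        masked = feat * m.view(1, C, 1, 1)
        disc = F.cross_entropy(aux_head(masked), labels)   # discrimination term
        recon = F.mse_loss(masked, target_feat)            # reconstruction term
        return disc + recon_weight * recon

    selected = []
    for _ in range(num_keep):
        loss = joint_loss(mask)
        grad, = torch.autograd.grad(loss, mask)
        if selected:
            grad[selected] = 0.0           # ignore channels already chosen
        c = int(grad.abs().argmax())       # channel with the largest loss sensitivity
        selected.append(c)
        with torch.no_grad():
            mask[c] = 1.0                  # greedily open this channel
    return sorted(selected)
```

In this sketch, `aux_head` could be as simple as global average pooling followed by a linear classifier over the C channels; in the method described above, channel selection of this kind alternates with optimizing the parameters of the selected channels, layer by layer.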

Author Information

Zhuangwei Zhuang (South China University of Technology)
Mingkui Tan (South China University of Technology)
Bohan Zhuang (The University of Adelaide)
Jing Liu (South China University of Technology)
Yong Guo (South China University of Technology)
Qingyao Wu (South China University of Technology)
Junzhou Huang (University of Texas at Arlington / Tencent AI Lab)
Jinhui Zhu (South China University of Technology)
