Poster
Rotated Binary Neural Network
Mingbao Lin · Rongrong Ji · Zihan Xu · Baochang Zhang · Yan Wang · Yongjian Wu · Feiyue Huang · Chia-Wen Lin

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1894

Binary Neural Networks (BNNs) are prominent for reducing the complexity of deep neural networks, but they suffer severe performance degradation. One of the major impediments is the large quantization error between the full-precision weight vector and its binary vector. Previous works focus on compensating for the norm gap while leaving the angular bias largely untouched. In this paper, we explore for the first time the influence of angular bias on the quantization error and then introduce a Rotated Binary Neural Network (RBNN), which considers the angle alignment between the full-precision weight vector and its binarized version. At the beginning of each training epoch, we rotate the full-precision weight vector toward its binary vector to reduce the angular bias. To avoid the high complexity of learning a large rotation matrix, we further introduce a bi-rotation formulation that learns two smaller rotation matrices. During training, we devise an adjustable rotated weight vector for binarization to escape potential local optima. Our rotation leads to around 50% weight flips, which maximizes the information gain. Finally, we propose a training-aware approximation of the sign function for gradient backpropagation. Experiments on CIFAR-10 and ImageNet demonstrate the superiority of RBNN over many state-of-the-art methods. Our source code, experimental settings, training logs and binary models are available at https://github.com/lmbxmu/RBNN.
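To make two of these steps concrete, below is a minimal PyTorch sketch of (i) applying a bi-rotation without ever materializing the full rotation matrix and (ii) a sign function with a training-aware gradient surrogate. This is an illustrative reading of the abstract, not the authors' implementation: the function names, the Kronecker-product form of the bi-rotation, and the tanh-based backward are assumptions; the actual code is in the linked repository.

import torch

def bi_rotate(w, R1, R2):
    # Apply R = R1 (kron) R2 to a flattened weight vector w of length a*b.
    # By the identity (R1 kron R2) vec(W) = vec(R1 @ W @ R2.T) for a
    # row-major flattening, only the two small matrices are ever stored.
    a, b = R1.shape[0], R2.shape[0]
    W = w.view(a, b)
    return (R1 @ W @ R2.t()).reshape(-1)

class TrainingAwareSign(torch.autograd.Function):
    # Forward is the exact sign; backward uses the derivative of tanh(t*x).
    # Annealing t upward across epochs (one plausible reading of
    # "training-aware") sharpens the surrogate toward the true sign.
    @staticmethod
    def forward(ctx, x, t):
        ctx.save_for_backward(x)
        ctx.t = t
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        t = ctx.t
        grad_x = grad_out * t * (1.0 - torch.tanh(t * x) ** 2)
        return grad_x, None  # no gradient for the schedule parameter t

# Hypothetical usage: rotate, then binarize with the surrogate gradient.
w = torch.randn(8 * 16, requires_grad=True)
R1, R2 = torch.eye(8), torch.eye(16)  # placeholders for learned rotations
b = TrainingAwareSign.apply(bi_rotate(w, R1, R2), 2.0)
b.sum().backward()  # gradients flow back to w through the surrogate

The bi-rotation is the source of the complexity saving: for a length-n weight vector with n = a*b, the two small rotations hold a^2 + b^2 entries instead of the n^2 entries of a full rotation matrix.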

Author Information

Mingbao Lin (Xiamen University)
Rongrong Ji (Xiamen University, China)
Zihan Xu (Xiamen University, China)
Baochang Zhang (Beihang University)
Yan Wang (Pinterest)
Yongjian Wu (Tencent Technology (Shanghai) Co., Ltd.)
Feiyue Huang (Tencent)
Chia-Wen Lin (National Tsing Hua University)