We introduce collaborative learning, in which multiple classifier heads of the same network are trained simultaneously on the same training data to improve generalization and robustness to label noise at no extra inference cost. It combines the strengths of auxiliary training, multi-task learning, and knowledge distillation. Two mechanisms are central to collaborative learning. First, the consensus of multiple views from different classifier heads on the same example provides supplementary information as well as regularization to each classifier, thereby improving generalization. Second, intermediate-level representation (ILR) sharing with backpropagation rescaling aggregates the gradient flows from all heads, which not only reduces training computational complexity but also facilitates supervision of the shared layers. Empirical results on the CIFAR and ImageNet datasets demonstrate that deep neural networks learned as a group in a collaborative way significantly reduce the generalization error and increase robustness to label noise.
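The two mechanisms named in the abstract — a consensus objective across heads and ILR sharing with backpropagation rescaling — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the hyperparameters (temperature `T`, consensus weight `beta`), the two-layer architecture, and the simplified gradient (cross-entropy term only) are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
H, N, D, C = 2, 4, 8, 3      # heads, batch size, feature dim, classes
T, beta = 2.0, 0.5           # distillation temperature, consensus weight (illustrative)

x = rng.normal(size=(N, D))
y = rng.integers(0, C, size=N)

W_shared = rng.normal(size=(D, D)) * 0.1    # shared lower layers (ILR)
W_heads = rng.normal(size=(H, D, C)) * 0.1  # one classifier head per view

h = np.maximum(x @ W_shared, 0.0)              # shared intermediate representation
logits = np.einsum('nd,hdc->hnc', h, W_heads)  # per-head logits, shape (H, N, C)

p = softmax(logits)               # per-head predictions
p_soft = softmax(logits / T)      # softened predictions for the consensus
consensus = p_soft.mean(axis=0)   # consensus of all heads, shape (N, C)

onehot = np.eye(C)[y]
# Per-head cross-entropy against the labels, shape (H,)
ce = -(onehot * np.log(p)).sum(-1).mean(-1)
# Per-head KL divergence from the consensus (regularization term), shape (H,)
kl = (consensus * (np.log(consensus) - np.log(p_soft))).sum(-1).mean(-1)

total_loss = (ce + beta * kl).sum()

# Backpropagation rescaling: gradients from all H heads flow into the
# shared layers, so their aggregate contribution is scaled by 1/H to keep
# the shared layers' gradient magnitude comparable to a single-head net.
# (Simplified: only the cross-entropy term's gradient is shown.)
dlogits = (p - onehot[None]) / N                          # (H, N, C)
dh = np.einsum('hnc,hdc->nd', dlogits, W_heads) / H       # rescaled gradient w.r.t. ILR

print(total_loss, dh.shape)
```

At inference time only one head is kept, which is why the method adds no inference cost; the extra heads and the consensus term exist solely to regularize training of the shared layers.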
Guocong Song (Playground Global)
Wei Chai (Google Inc)