

Poster in Workshop: Has it Trained Yet? A Workshop for Algorithmic Efficiency in Practical Neural Network Training

Layover Intermediate Layer for Multi-Label Classification in Efficient Transfer Learning

Seongha Eom · Taehyeon Kim · Se-Young Yun


Abstract:

Transfer Learning (TL) is a promising technique for improving the performance of a target task by transferring the knowledge of models trained on relevant source datasets. With the advent of deep models, various methods that exploit large-scale pre-trained models have come into the limelight. However, for multi-label classification tasks, TL approaches suffer performance degradation when predicting multiple objects in an image whose sizes differ significantly. Because such hard instances contain barely perceptible objects, most pre-trained models lose the relevant information during downsampling. For these hard instances, this paper proposes a simple but effective classifier that makes multiple predictions using hidden representations from a frozen backbone. To this end, we mix the pre-logit with an intermediate representation using a learnable scale. We show that our method is as effective as fine-tuning while adding few parameters, and is particularly advantageous on hard instances.
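A minimal sketch of the mixing step described above, assuming a frozen backbone that exposes both a pooled pre-logit vector and a spatial intermediate feature map; the module and parameter names (LayoverClassifier, inter_dim, prelogit_dim, alpha) are hypothetical illustrations, not the authors' implementation:

```python
import torch
import torch.nn as nn


class LayoverClassifier(nn.Module):
    """Sketch: mix a frozen backbone's pre-logit with an intermediate
    representation via a learnable scale, then classify per label.
    Names and shapes are assumptions for illustration only."""

    def __init__(self, inter_dim: int, prelogit_dim: int, num_labels: int):
        super().__init__()
        # Project the intermediate feature to the pre-logit dimension.
        self.proj = nn.Linear(inter_dim, prelogit_dim)
        # Learnable scalar controlling how much intermediate signal is mixed in;
        # initialized to zero so training starts from the backbone's pre-logit alone.
        self.alpha = nn.Parameter(torch.zeros(1))
        self.head = nn.Linear(prelogit_dim, num_labels)

    def forward(self, prelogit: torch.Tensor, intermediate: torch.Tensor) -> torch.Tensor:
        # Pool the spatial intermediate feature map (B, C, H, W) -> (B, C).
        pooled = intermediate.mean(dim=(2, 3))
        # Mix with a learnable scale and produce multi-label logits
        # (apply a per-class sigmoid for multi-label prediction).
        mixed = prelogit + self.alpha * self.proj(pooled)
        return self.head(mixed)
```

In this sketch only the small mixing head is trained while the backbone stays fixed, which matches the abstract's claim of fine-tuning-level effectiveness with few additional parameters; retaining the higher-resolution intermediate map is what would help recover small objects lost to downsampling.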
