

Poster in Workshop: INTERPOLATE — First Workshop on Interpolation Regularizers and Beyond

SMILE: Sample-to-feature MIxup for Efficient Transfer LEarning

Xingjian Li · Haoyi Xiong · Cheng-Zhong Xu · Dejing Dou

Keywords: [ mixup ] [ interpolation ] [ regularization ] [ transfer learning ]


Abstract:

To improve the performance of deep learning, mixup has been proposed to encourage neural networks to favor simple linear behavior between training samples. Performing mixup for transfer learning with pre-trained models, however, is not that simple: a high-capacity pre-trained model with a large fully-connected (FC) layer can easily overfit to the target dataset even with sample-to-label mixup. In this work, we propose SMILE — Sample-to-feature MIxup for Efficient Transfer LEarning. With mixed images as inputs, SMILE regularizes the outputs of the CNN feature extractor to learn from the mixed feature vectors of the inputs, in addition to the mixed labels. SMILE incorporates a mean teacher to provide the surrogate "ground truth" for the mixed feature vectors. Extensive experiments verify the performance improvements made by SMILE in comparison with a wide spectrum of transfer learning algorithms, including fine-tuning, L2-SP, DELTA, BSS, RIFLE, Co-Tuning, and RegSL, even when these baselines are combined with mixup strategies.
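The abstract is terse on mechanics, so here is a minimal PyTorch sketch of the sample-to-feature mixup objective as described: mix two image batches, extract features of the mixed input with the trainable encoder, and regularize those features toward the interpolation of the mean teacher's features of the unmixed inputs, alongside the usual mixed-label loss. The names (`encoder`, `teacher`, `fc`, `smile_loss`), the MSE feature loss, and the weighting `beta` are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def smile_loss(encoder, teacher, fc, x_a, x_b, y_a, y_b, lam, beta=1.0):
    """One training objective in the spirit of SMILE (illustrative sketch).

    encoder: trainable CNN feature extractor (outputs pooled feature vectors)
    teacher: mean-teacher (EMA) copy of the encoder, never back-propagated
    fc:      classification head on top of the extracted features
    x_a/x_b: a batch and its shuffled counterpart; y_a/y_b: their labels
    lam:     mixing coefficient, typically sampled from a Beta distribution
    """
    # Standard input mixup: interpolate the two batches of images.
    x_mix = lam * x_a + (1.0 - lam) * x_b

    # Student features for the mixed input.
    f_mix = encoder(x_mix)

    # The mean teacher provides the surrogate "ground truth" features:
    # extract features of the *unmixed* inputs, then interpolate them.
    with torch.no_grad():
        f_target = lam * teacher(x_a) + (1.0 - lam) * teacher(x_b)

    # Sample-to-label term: the usual mixup classification loss.
    logits = fc(f_mix)
    cls_loss = lam * F.cross_entropy(logits, y_a) \
             + (1.0 - lam) * F.cross_entropy(logits, y_b)

    # Sample-to-feature term: pull the mixed-input features toward the
    # mixed teacher features (MSE is an assumed choice of distance).
    feat_loss = F.mse_loss(f_mix, f_target)

    return cls_loss + beta * feat_loss

@torch.no_grad()
def ema_update(teacher, encoder, decay=0.999):
    """After each optimizer step, move the teacher toward the student."""
    for t, s in zip(teacher.parameters(), encoder.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)
```

In a training loop one would call `smile_loss(...)`, back-propagate, step the optimizer for `encoder` and `fc` only, and then call `ema_update(teacher, encoder)` so the teacher tracks an exponential moving average of the student, as is standard for mean-teacher methods.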
