
Effect of mixup Training on Representation Learning
Arslan Chaudhry · Aditya Menon · Andreas Veit · Sadeep Jayasumana · Srikumar Ramalingam · Sanjiv Kumar
Event URL: https://openreview.net/forum?id=qu3pJkk6Ngg

Mixup is a regularization technique that artificially produces new samples as convex combinations of original training points. This simple technique has shown strong empirical performance and is heavily used in semi-supervised learning methods such as MixMatch (Berthelot et al., 2019) and Interpolation Consistency Training (ICT) (Verma et al., 2019). In this paper, we look at mixup through a representation learning lens in a semi-supervised learning setup. In particular, we study the role of mixup in promoting linearity in the learned network representations. To this end, we study two questions: (1) how does the mixup loss, which enforces linearity at the last network layer, propagate that linearity to earlier layers? and (2) how does enforcing a stronger mixup loss on more than two data points affect training convergence? We empirically investigate these properties of mixup on vision datasets such as CIFAR-10, CIFAR-100, and SVHN. Our results show that supervised mixup training does not make all the network layers linear; in fact, the intermediate layers become more non-linear during mixup training than in a network trained without mixup. However, when mixup is used as an unsupervised loss, we observe that all the network layers become more linear, resulting in faster training convergence.
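For readers unfamiliar with the technique, the following is a minimal sketch of the standard mixup operation described in the abstract: each mixed sample is a convex combination of a pair of training points and their (one-hot) labels. The function name, the Beta-distribution `alpha` parameter, and the pairing-by-shuffling scheme are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return mixed inputs and labels as convex combinations of a batch
    and a randomly shuffled copy of itself (standard mixup).

    x: array of inputs, shape (batch, ...)
    y: array of one-hot labels, shape (batch, num_classes)
    alpha: Beta(alpha, alpha) parameter controlling mixing strength (assumed)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

A typical use is to replace each training minibatch (x, y) with mixup_batch(x, y) before computing the loss; semi-supervised variants such as ICT instead mix unlabeled inputs and enforce consistency between the model's output on the mixed input and the mixture of its outputs on the originals.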

Author Information

Arslan Chaudhry (DeepMind)

I am a Research Scientist at DeepMind in Mountain View. I am interested in machine learning models that can learn efficiently from multiple tasks. To that end, I study continual, meta-, and transfer learning.

Aditya Menon (Google)
Andreas Veit (Google)
Sadeep Jayasumana (Google)
Srikumar Ramalingam (Google)
Sanjiv Kumar (Google Research)
