Poster in Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Does Unconstrained Unlabeled Data Help Semi-Supervised Learning?

Shuvendu Roy · Ali Etemad


Abstract:

In this work, we study the role of unconstrained unlabeled data in semi-supervised learning and propose UnMixMatch, a semi-supervised learning framework that can learn effective representations from such unlabeled data. Most existing semi-supervised methods rely on the assumption that labeled and unlabeled samples are drawn from the same distribution, which limits the potential for improvement from freely available unlabeled data. Consequently, this assumption often hinders the generalizability and scalability of semi-supervised learning. Our method aims to overcome these constraints and effectively utilize unconstrained unlabeled data in semi-supervised learning. UnMixMatch consists of three main components: a supervised learner with hard augmentations that provides strong regularization, a contrastive consistency regularizer that learns underlying representations from the unlabeled data, and a self-supervised loss that enhances the representations learned from the unlabeled data. We perform extensive experiments on 4 commonly used datasets and demonstrate superior performance over existing semi-supervised methods, with a performance boost of 4.79%. Extensive ablation and sensitivity studies show the effectiveness of each of the proposed components of our method.
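The abstract only names the three components, so the sketch below is an assumption-based illustration of how such a combined objective could look, not the authors' implementation: the projection head `proj_head`, the rotation head `rot_head`, the `model.features` accessor, the NT-Xent contrastive formulation, the rotation-prediction self-supervised task, and the loss weights `lambda_c` and `lambda_s` are all hypothetical choices.

```python
# Hypothetical sketch of a three-part objective in the spirit of UnMixMatch.
# The specific contrastive and self-supervised losses, heads, and weights
# are assumptions for illustration, not the paper's exact method.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss between two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def combined_loss(model, proj_head, rot_head,
                  x_lab, y_lab, x_unlab,
                  hard_aug, weak_aug,
                  lambda_c=1.0, lambda_s=0.5):
    """Combine (1) a supervised loss on hard-augmented labeled data,
    (2) a contrastive consistency term on two views of unlabeled data, and
    (3) a self-supervised rotation-prediction loss on unlabeled data."""
    # (1) Supervised term with strong augmentation as regularization.
    logits = model(hard_aug(x_lab))
    loss_sup = F.cross_entropy(logits, y_lab)

    # (2) Contrastive consistency between two augmented views of unlabeled data.
    f1 = proj_head(model.features(weak_aug(x_unlab)))
    f2 = proj_head(model.features(hard_aug(x_unlab)))
    loss_con = nt_xent(f1, f2)

    # (3) Self-supervised auxiliary task (rotation prediction as one example).
    k = torch.randint(0, 4, (x_unlab.size(0),), device=x_unlab.device)
    x_rot = torch.stack([torch.rot90(x, int(r), dims=(1, 2))
                         for x, r in zip(x_unlab, k)])
    loss_self = F.cross_entropy(rot_head(model.features(x_rot)), k)

    return loss_sup + lambda_c * loss_con + lambda_s * loss_self
```

Because the unlabeled branch never uses pseudo-labels from the labeled classes, a formulation like this does not require the labeled and unlabeled data to share a distribution, which is the constraint the abstract argues against.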
