

Poster

Class-Distribution-Aware Pseudo-Labeling for Semi-Supervised Multi-Label Learning

Ming-Kun Xie · Jiahao Xiao · Hao-Zhe Liu · Gang Niu · Masashi Sugiyama · Sheng-Jun Huang

Great Hall & Hall B1+B2 (level 1) #1104
Wed 13 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data. However, in the context of semi-supervised multi-label learning (SSMLL), conventional pseudo-labeling methods encounter difficulties when dealing with instances associated with multiple labels and an unknown label count. These limitations often result in the introduction of false positive labels or the neglect of true positive ones. To overcome these challenges, this paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner. The proposed approach introduces a regularized learning framework incorporating class-aware thresholds, which effectively control the assignment of positive and negative pseudo-labels for each class. Notably, even with a small proportion of labeled examples, we observe that the estimated class distribution serves as a reliable approximation of the true one. Motivated by this finding, we develop a class-distribution-aware thresholding strategy to ensure the alignment of the pseudo-label distribution with the true distribution. The correctness of the estimated class distribution is theoretically verified, and a generalization error bound is provided for our proposed method. Extensive experiments on multiple benchmark datasets confirm the efficacy of CAP in addressing the challenges of SSMLL problems.
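To make the thresholding idea concrete, the sketch below shows one simple way to assign pseudo-labels so that each class's positive rate on the unlabeled data matches the class proportion estimated from the labeled set: the per-class threshold is taken as a quantile of the model's scores. This is a minimal illustration of the distribution-alignment principle described in the abstract, not the paper's exact CAP procedure (which embeds the thresholds in a regularized learning framework); the function name and the quantile rule are illustrative assumptions.

```python
import numpy as np

def class_distribution_aware_pseudo_labels(scores_u, labels_l):
    """Assign pseudo-labels to unlabeled instances using per-class thresholds.

    scores_u: (n_u, C) predicted probabilities on the unlabeled data.
    labels_l: (n_l, C) binary label matrix of the labeled set, used to
              estimate each class's proportion of positive instances.
    Returns a (n_u, C) 0/1 pseudo-label matrix.
    """
    n_u, num_classes = scores_u.shape
    # Estimate the class distribution from the (small) labeled set.
    class_priors = labels_l.mean(axis=0)  # shape (C,)

    pseudo = np.zeros_like(scores_u, dtype=np.int64)
    for c in range(num_classes):
        if class_priors[c] <= 0.0:
            continue  # no positives observed for this class; keep all negatives
        # Pick the (1 - prior_c) quantile of the scores as the threshold, so
        # roughly prior_c * n_u unlabeled instances receive a positive label,
        # aligning the pseudo-label distribution with the estimated one.
        tau_c = np.quantile(scores_u[:, c], 1.0 - class_priors[c])
        pseudo[:, c] = (scores_u[:, c] >= tau_c).astype(np.int64)
    return pseudo

# Example usage with synthetic scores and labels:
rng = np.random.default_rng(0)
scores_u = rng.random((1000, 5))                      # model scores on unlabeled data
labels_l = (rng.random((100, 5)) < 0.2).astype(int)   # labeled set, ~20% positives per class
pseudo = class_distribution_aware_pseudo_labels(scores_u, labels_l)
print(pseudo.mean(axis=0))                            # per-class positive rate, close to 0.2
```

In practice these pseudo-labels would be recomputed as the model's scores improve during training, which is where the regularized framework and the theoretical guarantees mentioned in the abstract come in.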
