

Poster

Learning from Label Proportions by Learning with Label Noise

Jianxin Zhang · Yutong Wang · Clay Scott

Hall J (level 1) #536

Keywords: [ Learning from Label Noise ] [ Learning Theory ] [ Semi-Supervised Learning ] [ Machine Learning ] [ Learning from Label Proportions ]


Abstract:

Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels. The task is to learn a classifier to predict the labels of future individual instances. Prior work on LLP for multi-class data has yet to develop a theoretically grounded algorithm. In this work, we propose an approach to LLP based on a reduction to learning with label noise, using the forward correction (FC) loss of Patrini et al. (2017). We establish an excess risk bound and generalization error analysis for our approach, while also extending the theory of the FC loss, which may be of independent interest. Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures, compared to the leading methods.
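For context on the forward correction (FC) loss referenced in the abstract, the sketch below illustrates the standard FC loss of Patrini et al. (2017) for a known noise transition matrix T. It is a minimal, illustrative implementation only, not the authors' code; the function name `forward_corrected_loss` and the assumption that T is given in advance are choices made here for illustration, and how the paper constructs the transition matrix from bag-level label proportions is not reproduced.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, T):
    """Forward correction (Patrini et al., 2017), sketched for reference.

    logits:        (batch, num_classes) raw model outputs
    noisy_targets: (batch,) observed noisy class indices
    T:             (num_classes, num_classes) transition matrix with
                   T[i, j] = P(noisy label = j | clean label = i)
    """
    # Predicted posteriors over clean labels
    clean_probs = F.softmax(logits, dim=1)
    # Push them through T to get predicted posteriors over noisy labels
    noisy_probs = clean_probs @ T
    # Cross-entropy against the observed (noisy) labels
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)
```

The key design point of forward correction is that the model itself is trained to estimate clean-label posteriors, while the loss compares the noise-corrupted version of those posteriors to the observed labels.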
