Learning from Label Proportions: A Mutual Contamination Framework

Clay Scott, Jianxin Zhang

Poster Session 6
Thu, Dec 10th, 2020, 17:00–19:00 GMT
Abstract: Learning from label proportions (LLP) is a weakly supervised setting for classification in which unlabeled training instances are grouped into bags, and each bag is annotated with the proportion of each class occurring in that bag. Prior work on LLP has neither established a consistent learning procedure nor provided a theoretically justified, general-purpose training criterion. In this work we address these two issues by posing LLP in terms of mutual contamination models (MCMs), which have recently been applied successfully to study various other weak supervision settings. In the process, we establish several novel technical results for MCMs, including unbiased losses and generalization error bounds under non-iid sampling plans. We also point out the limitations of a common experimental setting for LLP, and propose a new one based on our MCM framework.
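The LLP setting described in the abstract can be illustrated with a small sketch: fully labeled data is grouped into bags, the individual labels are discarded, and the learner sees only each bag together with its class proportion. This is a minimal, hypothetical illustration of the data setup (the variable names and bag sizes are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw labeled instances, group them into bags, then discard the
# per-instance labels and keep only each bag's class-1 proportion.
n_bags, bag_size = 4, 8
X = rng.normal(size=(n_bags, bag_size, 2))       # features per instance
y = rng.integers(0, 2, size=(n_bags, bag_size))  # true labels (never shown to the learner)

# The learner observes only (bag of instances, proportion of class 1).
label_proportions = y.mean(axis=1)
training_data = list(zip(X, label_proportions))
print(label_proportions)
```

Each training example is a whole bag paired with a single number in [0, 1], rather than an instance paired with a label; this is what distinguishes LLP from ordinary supervised classification.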
