

Estimating the class prior and posterior from noisy positives and unlabeled data

Shantanu Jain · Martha White · Predrag Radivojac

Area 5+6+7+8 #78

Keywords: [ Semi-Supervised Learning ] [ Nonlinear Dimension Reduction and Manifold Learning ] [ Sparsity and Feature Selection ] [ Convex Optimization ] [ (Other) Statistics ] [ Kernel Methods ]


We develop a classification algorithm for estimating posterior distributions from positive-unlabeled data that is robust to noise in the positive labels and effective for high-dimensional data. In recent years, several algorithms have been proposed to learn from positive-unlabeled data; however, many of these contributions remain theoretical and perform poorly on real high-dimensional data, which is typically contaminated with noise. We build on this previous work to develop two practical classification algorithms that explicitly model the noise in the positive labels and utilize univariate transforms built on discriminative classifiers. We prove that these univariate transforms preserve the class prior, enabling estimation in the univariate space and avoiding kernel density estimation for high-dimensional data. The theoretical development and the parametric and nonparametric algorithms proposed here constitute an important step towards widespread use of robust classification algorithms for positive-unlabeled data.
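To make the idea concrete, here is a minimal sketch of the general approach the abstract describes: a discriminative classifier's score serves as a univariate transform of high-dimensional inputs, and the class prior is then estimated in that one-dimensional space. This is an illustrative stand-in, not the authors' algorithm; the synthetic data, the logistic-regression transform, and the simple ratio estimator (in the style of Elkan and Noto's PU identity) are all assumptions for demonstration.

```python
# Illustrative sketch only: estimate the class prior from positive-unlabeled
# data by reducing to a univariate classifier score. Not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic high-dimensional data (assumed setup):
# positives ~ N(+1, I), negatives ~ N(-1, I), true prior alpha = P(y=1).
d, alpha = 10, 0.3
n_unlab, n_pos = 5000, 1000
n_u_pos = int(alpha * n_unlab)
X_unlab = np.vstack([rng.normal(+1.0, 1.0, (n_u_pos, d)),
                     rng.normal(-1.0, 1.0, (n_unlab - n_u_pos, d))])
X_pos = rng.normal(+1.0, 1.0, (n_pos, d))   # labeled positives

# Train a discriminative classifier to separate labeled positives (s=1)
# from unlabeled points (s=0). Its score g(x) = P(s=1 | x) is a
# univariate transform of the d-dimensional input.
X = np.vstack([X_pos, X_unlab])
s = np.concatenate([np.ones(n_pos), np.zeros(n_unlab)])
clf = LogisticRegression(max_iter=1000).fit(X, s)

# Estimate the prior entirely in the 1-D score space, here via the
# simple ratio  alpha_hat = E_unlabeled[g(x)] / E_positives[g(x)].
g_pos = clf.predict_proba(X_pos)[:, 1].mean()
g_unlab = clf.predict_proba(X_unlab)[:, 1].mean()
alpha_hat = g_unlab / g_pos
print(f"true prior {alpha:.2f}, estimated {alpha_hat:.2f}")
```

The point of the reduction is the last three lines: no density estimation ever touches the d-dimensional space, only the scalar scores, which is what makes the approach viable in high dimensions.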
