

Poster

Avoiding False Positive in Multi-Instance Learning

Yanjun Han · Qing Tao · Jue Wang


Abstract:

In multi-instance learning, there are two kinds of prediction failures, i.e., false negatives and false positives. Current research mainly focuses on avoiding the former. We attempt to utilize the geometric distribution of instances inside positive bags to avoid both. Based on kernel principal component analysis, we define a projection constraint for each positive bag so that its constituent instances are classified far away from the separating hyperplane, while positive and negative instances are placed on opposite sides. We apply the Constrained Concave-Convex Procedure to solve the resulting problem. Empirical results demonstrate that our approach offers improved generalization performance.
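The abstract's central idea is a per-bag constraint built from the geometry revealed by kernel PCA. The sketch below is only an illustration of that idea, not the authors' formulation: the toy bag, the `margin_spread` helper, the fixed hyperplane (w, b), and all parameter choices are assumptions, and the actual objective and CCCP solver are not reproduced.

```python
# Hypothetical sketch: project the instances of one positive bag with
# kernel PCA and measure their signed distances to a linear hyperplane.
# The real method's constraint and CCCP optimization are not shown here.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Toy "positive bag": a mix of positive-like and negative-like instances.
positive_instances = rng.normal(loc=+2.0, scale=0.5, size=(5, 10))
negative_instances = rng.normal(loc=-2.0, scale=0.5, size=(15, 10))
bag = np.vstack([positive_instances, negative_instances])

# Kernel PCA captures the dominant nonlinear directions of variation
# inside the bag (its "geometric distribution of instances").
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
bag_proj = kpca.fit_transform(bag)

# A (hypothetical) linear classifier in the projected space.
w = np.array([1.0, 0.0])
b = 0.0

def margin_spread(points, w, b):
    """Signed distances of projected instances to the hyperplane w.x + b = 0."""
    return (points @ w + b) / np.linalg.norm(w)

distances = margin_spread(bag_proj, w, b)

# A constraint in the spirit of the abstract would push |distance| to be
# large for every instance while keeping positives and negatives on
# opposite sides of the hyperplane within each positive bag.
print("min |distance| inside the bag:", np.abs(distances).min())
print("fraction on the positive side:", (distances > 0).mean())
```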
