
Are Anchor Points Really Indispensable in Label-Noise Learning?
Xiaobo Xia · Tongliang Liu · Nannan Wang · Bo Han · Chen Gong · Gang Niu · Masashi Sugiyama

Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #25
In label-noise learning, the noise transition matrix, denoting the probabilities that clean labels flip into noisy labels, plays a central role in building statistically consistent classifiers. Existing theories have shown that the transition matrix can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely). However, when there are no anchor points, the transition matrix will be poorly learned, and those previously consistent classifiers will significantly degenerate. In this paper, without employing anchor points, we propose a transition-revision (T-Revision) method to effectively learn transition matrices, leading to better classifiers. Specifically, to learn a transition matrix, we first initialize it by exploiting data points that are similar to anchor points, having high noisy class posterior probabilities. Then, we modify the initialized matrix by adding a slack variable, which can be learned and validated together with the classifier by using noisy data. Empirical results on benchmark-simulated and real-world label-noise datasets demonstrate that, without using exact anchor points, the proposed method is superior to state-of-the-art label-noise learning methods.
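The two-step recipe in the abstract (initialize T from points that behave like anchor points because they have high noisy class posteriors, then learn an additive slack jointly with the classifier) can be sketched in code. The following is a minimal PyTorch illustration under our own assumptions, not the authors' released implementation: the names estimate_transition_matrix, TRevisionHead, and noisy_nll_loss are ours, and a simple forward-correction likelihood stands in for the paper's full training and validation objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def estimate_transition_matrix(noisy_posteriors: torch.Tensor) -> torch.Tensor:
    """Initialize T from points that act like anchor points.

    noisy_posteriors: (n, c) estimates of P(noisy label | x), e.g. softmax
    outputs of a classifier trained on the noisy data. For each class i we
    pick the example with the highest posterior for class i and read off
    its posterior row as row i of T.
    """
    n, c = noisy_posteriors.shape
    T = torch.empty(c, c)
    for i in range(c):
        anchor_idx = noisy_posteriors[:, i].argmax()
        T[i] = noisy_posteriors[anchor_idx]
    return T


class TRevisionHead(nn.Module):
    """Wraps a backbone classifier with T + delta_T, where delta_T is the
    learnable slack variable added to the initialized transition matrix."""

    def __init__(self, backbone: nn.Module, T_init: torch.Tensor):
        super().__init__()
        self.backbone = backbone
        self.register_buffer("T", T_init)                       # fixed init
        self.delta_T = nn.Parameter(torch.zeros_like(T_init))   # learned slack

    def forward(self, x):
        clean_posterior = F.softmax(self.backbone(x), dim=1)    # P(Y | x)
        # Forward correction: P(noisy Y | x) = P(Y | x) @ (T + delta_T).
        # Rows of T + delta_T need not stay exactly stochastic during
        # training in this sketch.
        noisy_posterior = clean_posterior @ (self.T + self.delta_T)
        return clean_posterior, noisy_posterior


def noisy_nll_loss(noisy_posterior, noisy_labels, eps=1e-8):
    """Negative log-likelihood of the observed noisy labels under the
    corrected model; eps guards against log(0)."""
    return F.nll_loss(torch.log(noisy_posterior + eps), noisy_labels)
```

In use, one would first train the backbone on the noisy data, collect its softmax outputs to call estimate_transition_matrix, and then fine-tune the backbone and delta_T jointly by minimizing noisy_nll_loss, selecting delta_T with the same noisy likelihood on a held-out noisy validation split, since no clean data is assumed.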

Author Information

Xiaobo Xia (The University of Sydney / Xidian University)
Tongliang Liu (The University of Sydney)
Nannan Wang (Xidian University)
Bo Han (RIKEN)
Chen Gong (Nanjing University of Science and Technology)
Gang Niu (RIKEN)

Gang Niu is currently an indefinite-term senior research scientist at the RIKEN Center for Advanced Intelligence Project.

Masashi Sugiyama (RIKEN / University of Tokyo)
