Poster

Corruption Robust Active Learning

Yifang Chen · Simon Du · Kevin Jamieson

Keywords: [ Machine Learning ] [ Robustness ] [ Active Learning ]


Abstract: We conduct theoretical studies on streaming-based active learning for binary classification under unknown adversarial label corruptions. In this setting, every time before the learner observes a sample, the adversary decides whether to corrupt the label or not. First, we show that, in a benign corruption setting (which includes the misspecification setting as a special case), with a slight enlargement of the hypothesis elimination threshold, the classical RobustCAL framework can (surprisingly) achieve nearly the same label complexity guarantee as in the non-corrupted setting. However, this algorithm can fail in the general corruption setting. To resolve this drawback, we propose a new algorithm which is provably correct without any assumptions on the presence of corruptions. Furthermore, this algorithm enjoys the minimax label complexity in the non-corrupted setting (which is achieved by RobustCAL) and only requires $\tilde{O}(C_{\mathrm{total}})$ additional labels in the corrupted setting to achieve $O(\varepsilon + C_{\mathrm{total}}/n)$, where $\varepsilon$ is the target accuracy, $C_{\mathrm{total}}$ is the total number of corruptions, and $n$ is the total number of unlabeled samples.
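To make the idea of "enlarging the hypothesis elimination threshold" concrete, here is a minimal, illustrative sketch of a CAL-style disagreement-based active learner over a finite hypothesis class, where the elimination rule is relaxed by a corruption budget. The names (`corruption_budget`, `threshold_slack`), the finite-class setup, and the specific slack term are assumptions made for exposition; this is not the paper's RobustCAL algorithm or its new corruption-robust algorithm.

```python
import numpy as np

def cal_with_enlarged_threshold(hypotheses, stream, corruption_budget, delta=0.05):
    """Illustrative CAL-style active learner with an enlarged elimination threshold.

    hypotheses:        list of callables h(x) -> {0, 1} (finite hypothesis class)
    stream:            iterable of (x, query_label) pairs; query_label() returns the
                       (possibly corrupted) label only when called, i.e. when we
                       actually pay for a label query
    corruption_budget: assumed upper bound on the number of corrupted labels,
                       used here only to relax the elimination rule
    """
    active = list(hypotheses)                 # current version space
    losses = {id(h): 0 for h in active}       # empirical errors on queried points
    n_queried = 0

    for x, query_label in stream:
        preds = {h(x) for h in active}
        if len(preds) > 1:                    # hypotheses disagree -> query the label
            y = query_label()
            n_queried += 1
            for h in active:
                losses[id(h)] += int(h(x) != y)
            # standard concentration-style slack, plus an extra allowance for
            # adversarially corrupted labels (the "enlarged" threshold)
            best = min(losses[id(h)] for h in active)
            threshold_slack = np.sqrt(n_queried * np.log(len(hypotheses) / delta))
            keep = best + threshold_slack + corruption_budget
            active = [h for h in active if losses[id(h)] <= keep]
        # if all active hypotheses agree on x, skip the query (label-complexity saving)

    return active, n_queried
```

The design point this sketch tries to convey is that only points in the disagreement region are labeled, and surviving hypotheses are those whose empirical error stays within a slack of the best, where the slack is inflated by the corruption allowance so that the true optimal hypothesis is not eliminated by adversarially flipped labels.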
