Poster
Online Selective Classification with Limited Feedback
Aditya Gangrade · Anil Kag · Ashok Cutkosky · Venkatesh Saligrama

Thu Dec 09 04:30 PM -- 06:00 PM (PST)
Motivated by applications to resource-limited and safety-critical domains, we study selective classification in the online learning model, wherein a predictor may abstain from classifying an instance. For example, abstention may model an adaptive decision to invoke more resources on that instance. Two salient aspects of our setting are that the data may be non-realisable, due to which abstention may be a valid long-term action, and that feedback is received only when the learner abstains, which models the fact that reliable labels are available only when the resource-intensive processing is invoked. Within this framework, we explore strategies that make few mistakes while not abstaining many more times than the best-in-hindsight error-free classifier from a given class, that is, the classifier that makes no mistakes while abstaining the fewest number of times. We construct simple versioning-based schemes that, for any $\mu \in (0,1]$, make at most $T^\mu$ mistakes while incurring $\tilde{O}(T^{1-\mu})$ excess abstention against adaptive adversaries. We further show that this dependence on $T$ is tight, and provide illustrative experiments on realistic datasets.
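The versioning idea can be illustrated with a minimal sketch (this is an assumption-laden toy, not the authors' algorithm): maintain a version space of hypotheses not yet contradicted by observed labels, commit to a prediction only when all surviving hypotheses agree, and otherwise abstain, which in this feedback model is exactly when the true label is revealed and the version space can be pruned. The hypothesis class of 1-D thresholds and all names below are hypothetical choices for illustration.

```python
import random

def make_thresholds(k):
    # Hypothetical finite hypothesis class: 1-D threshold classifiers
    # h_t(x) = 1 if x >= t else 0, for t on a grid of k+1 points in [0, 1].
    return [i / k for i in range(k + 1)]

class VersionSpaceSelective:
    """Toy version-space selective classifier with abstention-only feedback."""

    def __init__(self, hypotheses):
        # Hypotheses not yet contradicted by any revealed label.
        self.version_space = list(hypotheses)

    def predict(self, x):
        # Commit only when every surviving hypothesis gives the same label;
        # otherwise abstain (return None), which triggers label feedback.
        labels = {int(x >= t) for t in self.version_space}
        if len(labels) == 1:
            return labels.pop()
        return None

    def update(self, x, y):
        # Called only on abstention rounds, when the true label y is revealed:
        # prune every hypothesis that mislabels (x, y).
        self.version_space = [t for t in self.version_space if int(x >= t) == y]

# Usage: a realisable stream with true threshold 0.5. In this easy case the
# scheme never errs, and abstentions shrink as the version space is pruned.
random.seed(0)
learner = VersionSpaceSelective(make_thresholds(20))
mistakes = abstentions = 0
for _ in range(200):
    x = random.random()
    y = int(x >= 0.5)
    pred = learner.predict(x)
    if pred is None:
        abstentions += 1
        learner.update(x, y)
    elif pred != y:
        mistakes += 1
```

In the non-realisable regime studied in the paper, naive pruning like this would empty the version space; the paper's schemes trade mistakes against excess abstention via the parameter $\mu$.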

Author Information

Aditya Gangrade (Boston University)
Anil Kag (Boston University)
Ashok Cutkosky (Boston University)
Venkatesh Saligrama (Boston University)
