

Poster

Quantifying Learning Guarantees for Convex but Inconsistent Surrogates

Kirill Struminsky · Simon Lacoste-Julien · Anton Osokin

Room 517 AB #105

Keywords: [ Structured Prediction ] [ Learning Theory ]


Abstract:

We study consistency properties of machine learning methods based on minimizing convex surrogates. We extend the recent framework of Osokin et al. (2017) for the quantitative analysis of consistency properties to the case of inconsistent surrogates. Our key technical contribution is a new lower bound on the calibration function for the quadratic surrogate, which is non-trivial (not always zero) in inconsistent cases. The new bound allows us to quantify the level of inconsistency of the setting and shows how learning with inconsistent surrogates can still come with guarantees on sample complexity and optimization difficulty. We apply our theory to two concrete cases: multi-class classification with the tree-structured loss and ranking with the mean average precision loss. The results illustrate the approximation-computation trade-offs caused by inconsistent surrogates and their potential benefits.
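For context, a brief sketch of the central quantity the abstract refers to, following the framework of Osokin et al. (2017); the notation below (score set F, target loss L, surrogate Phi) is an assumption drawn from that framework rather than a statement of this poster's results. The calibration function measures the smallest excess surrogate risk that guarantees a given excess target risk:

\[
H_{\Phi, L, \mathcal{F}}(\varepsilon) \;=\; \inf_{f \in \mathcal{F},\, q \in \Delta_{\mathcal{Y}}} \delta\Phi(f, q)
\quad \text{subject to} \quad \delta L(f, q) \ge \varepsilon,
\]

where \(\delta L(f, q)\) and \(\delta \Phi(f, q)\) denote the excess of the conditional target and surrogate risks over their respective infima. Consistency corresponds to \(H(\varepsilon) > 0\) for every \(\varepsilon > 0\); for an inconsistent surrogate, \(H\) vanishes on an initial interval of \(\varepsilon\), and a non-trivial lower bound on \(H\) quantifies both the length of that interval (the level of inconsistency) and the guarantees available beyond it.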
