2017 Talk in Workshop: Learning on Distributions, Functions, Graphs and Groups

On Structured Prediction Theory with Calibrated Convex Surrogate Losses

Simon Lacoste-Julien


Abstract:

We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent, and we prove tight bounds on the so-called "calibration function" relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully monitor the effect of the exponential number of classes on the learning guarantees as well as on the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for general structured prediction.
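For readers unfamiliar with the term, the "calibration function" can be stated as follows; this is a paraphrase under assumed notation (following the arXiv paper linked below), where \delta\phi(f, q) and \delta\ell(f, q) denote the excess conditional surrogate risk and the excess conditional task risk of a score vector f under a conditional label distribution q:

    H_{\Phi, L}(\varepsilon) \;=\; \inf_{f,\, q} \big\{\, \delta\phi(f, q) \;:\; \delta\ell(f, q) \ge \varepsilon \,\big\}.

By definition of the infimum, \delta\phi(f, q) < H_{\Phi, L}(\varepsilon) forces \delta\ell(f, q) < \varepsilon, which is the precise sense in which bounds on H relate the excess surrogate risk to the actual (task) risk.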

This talk is based on joint work with Anton Osokin and Francis Bach (https://arxiv.org/abs/1703.02403).
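To make the "convex surrogate optimized via stochastic gradient descent" recipe concrete, here is a minimal sketch in Python/NumPy. It assumes a quadratic surrogate Phi(f, y) = ||f + L(., y)||^2 / (2k) over k classes, with linear scores and argmax prediction; this is one calibrated construction consistent with the abstract's description, not necessarily the exact family analyzed in the paper, and all data and parameter values below are hypothetical.

import numpy as np

# Minimal sketch (hypothetical data): SGD on the quadratic surrogate
#   Phi(f, y) = ||f + L(., y)||^2 / (2k),   pred(f) = argmax_c f_c,
# where L is the k x k task-loss matrix and f = W @ x are linear scores.
rng = np.random.default_rng(0)
k, d, n = 5, 10, 500                     # classes, features, samples

L = 1.0 - np.eye(k)                      # task-loss matrix; here: 0-1 loss
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(k, d))         # synthetic ground-truth scorer
y = np.argmax(X @ W_true.T, axis=1)      # synthetic labels

W = np.zeros((k, d))                     # linear model: f(x) = W @ x
lr = 0.1

for epoch in range(30):
    for i in rng.permutation(n):
        f = W @ X[i]                     # scores for all k classes
        g = (f + L[:, y[i]]) / k         # gradient of Phi w.r.t. the scores
        W -= lr * np.outer(g, X[i])      # chain rule through f = W @ x

preds = np.argmax(X @ W.T, axis=1)       # predict by argmax of the scores
print("average 0-1 task loss on the training set:", L[preds, y].mean())

Minimizing this surrogate drives the score vector toward the negative expected task loss, so taking the argmax of the learned scores approximates the Bayes-optimal prediction for L; the calibration function then quantifies how quickly optimizing the surrogate translates into low task risk.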
