

Poster

A Smoother Way to Train Structured Prediction Models

Krishna Pillutla · Vincent Roulet · Sham Kakade · Zaid Harchaoui

Room 517 AB #102

Keywords: [ Structured Prediction ] [ Convex Optimization ]


Abstract:

We present a framework to train a structured prediction model by performing smoothing on the inference algorithm it builds upon. Smoothing overcomes the non-smoothness inherent to the maximum margin structured prediction objective, and paves the way for the use of fast primal gradient-based optimization algorithms. We illustrate the proposed framework by developing a novel primal incremental optimization algorithm for the structural support vector machine. The proposed algorithm blends an extrapolation scheme for acceleration with an adaptive smoothing scheme, and builds upon the stochastic variance-reduced gradient algorithm. We establish its worst-case global complexity bound and study several practical variants. We present experimental results on two real-world problems, namely named entity recognition and visual object localization. The results show that the proposed framework lets us build upon efficient inference algorithms to develop large-scale optimization algorithms for structured prediction that achieve competitive performance on both problems.
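To make the idea of smoothing the max-margin objective concrete, here is a minimal illustrative sketch in Python. It assumes an explicitly enumerated set of candidate outputs with given scores, and uses entropy (log-sum-exp) smoothing of the margin-augmented max, one standard way to smooth such objectives; the function names, the enumeration, and the 0/1 margin are illustrative assumptions, not the authors' implementation, which relies on smoothed inference algorithms rather than enumeration.

```python
import numpy as np

def structured_hinge(scores, gold_idx):
    """Non-smooth max-margin loss over an enumerated output set (illustrative)."""
    margins = np.ones_like(scores)   # 0/1 margin: 1 for wrong outputs, 0 for the gold one
    margins[gold_idx] = 0.0
    augmented = scores + margins
    return np.max(augmented) - scores[gold_idx]

def smoothed_structured_hinge(scores, gold_idx, mu=1.0):
    """Entropy-smoothed variant: replace max with mu * log-sum-exp(./mu).

    The result is differentiable in the scores and approaches the
    non-smooth hinge loss as the smoothing parameter mu -> 0.
    """
    margins = np.ones_like(scores)
    margins[gold_idx] = 0.0
    augmented = scores + margins
    z = augmented / mu
    zmax = np.max(z)                 # stabilized log-sum-exp
    lse = mu * (zmax + np.log(np.sum(np.exp(z - zmax))))
    return lse - scores[gold_idx]

# Example: 4 candidate outputs, gold output at index 2.
scores = np.array([1.0, 0.3, 1.2, -0.5])
print(structured_hinge(scores, gold_idx=2))
print(smoothed_structured_hinge(scores, gold_idx=2, mu=0.1))
```

In the large-output-space setting of the paper, the max (and its smoothed counterpart) would be computed by the task's inference algorithm, e.g. dynamic programming for named entity recognition, rather than by enumerating candidates as above; the smoothing parameter can then be decreased adaptively during optimization.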
