Poster

Deep Supervised Summarization: Algorithm and Application to Learning Instructions

Chengguang Xu · Ehsan Elhamifar

East Exhibition Hall B + C #68

Keywords: [ Algorithms ] [ Sparse Coding and Dimensionality Expansion ] [ Optimization ] [ Convex Optimization ]


Abstract:

We address the problem of finding representative points of datasets by learning from multiple datasets and their ground-truth summaries. We develop a supervised subset selection framework, based on the facility location utility function, that learns to map datasets to their ground-truth representatives. To do so, we propose to learn representations of the data such that feeding the transformed data to the facility location function recovers their ground-truth representatives. Given the NP-hardness of maximizing the utility function, we consider its convex relaxation based on sparse representation and investigate conditions under which the solution of the convex optimization recovers the ground-truth representatives of each dataset. We design a loss function whose minimization over the parameters of the data representation network leads to satisfying these theoretical conditions, hence guaranteeing recovery of the ground-truth summaries. Given the non-convexity of the loss function, we develop an efficient learning scheme that alternates between representation learning, by minimizing our proposed loss given the current assignments of points to ground-truth representatives, and updating the assignments given the current data representation. Through experiments on the problem of learning key-steps (subactivities) of instructional videos, we show that our proposed framework outperforms state-of-the-art supervised subset selection algorithms.
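A rough sketch of the alternating scheme described in the abstract, assuming a small two-layer embedding network, a margin-based hinge loss, and nearest-representative assignments in the learned space; none of these choices are taken from the paper, and all function and variable names are illustrative only.

```python
# Minimal sketch (not the authors' code) of alternating between
# (i) representation learning given fixed assignments of points to
# ground-truth representatives and (ii) re-assigning points to their
# nearest ground-truth representative in the learned space.
# The hinge loss and network architecture are assumptions for illustration.
import torch
import torch.nn as nn

def assign_to_representatives(Z, rep_idx):
    """Assign each embedded point to its nearest ground-truth representative."""
    d = torch.cdist(Z, Z[rep_idx])             # (n, k) distances to representatives
    return d.argmin(dim=1)                     # assignment index into rep_idx

def alternating_train(X, rep_idx, dim=32, margin=1.0, outer=5, inner=100, lr=1e-3):
    """X: (n, p) points of one dataset; rep_idx: indices of its ground-truth summary."""
    net = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, dim))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    with torch.no_grad():
        assign = assign_to_representatives(net(X), rep_idx)  # initial assignments
    for _ in range(outer):
        for _ in range(inner):                 # representation-learning step
            Z = net(X)
            d = torch.cdist(Z, Z[rep_idx])
            d_pos = d.gather(1, assign.unsqueeze(1))
            # hinge loss: the assigned representative should be closer
            # than every other representative by at least the margin
            loss = torch.relu(margin + d_pos - d).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                  # assignment-update step
            assign = assign_to_representatives(net(X), rep_idx)
    return net, assign

# Toy usage: 100 random 16-d points; points 0 and 50 play the role of
# ground-truth representatives.
X = torch.randn(100, 16)
net, assign = alternating_train(X, rep_idx=torch.tensor([0, 50]))
```

In the paper, the learned representations are then fed to the convex facility location relaxation to select representatives; the sketch above only illustrates the alternation between the loss-minimization and assignment-update steps.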
