

Poster

Differentiable Learning of Submodular Functions

Josip Djolonga · Andreas Krause

Pacific Ballroom #157

Keywords: [ Submodular Optimization ] [ Graphical Models ] [ Variational Inference ] [ Combinatorial Optimization ]


Abstract:

Can we incorporate discrete optimization algorithms within modern machine learning models? For example, is it possible to include in a deep architecture a layer whose output is the minimum cut of a parametrized graph? Given that these models are trained end-to-end by leveraging gradient information, introducing such layers seems very challenging because their output is not continuous. In this paper we focus on the problem of submodular minimization, for which we show that such layers are indeed possible. The key idea is that the output can be continuously relaxed without sacrificing guarantees. We provide an easily computable approximation to the Jacobian, complemented by a complete theoretical analysis. Finally, these contributions let us experimentally learn probabilistic log-supermodular models via a bi-level variational inference formulation.
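As a rough illustration of the continuous-relaxation idea mentioned in the abstract, the sketch below evaluates the Lovász extension of a submodular set function (here a graph-cut function), the standard convex relaxation whose minimization over the unit cube recovers a minimizer of the discrete problem and which admits gradients almost everywhere. This is a minimal NumPy sketch, not the paper's implementation; the names `graph_cut` and `lovasz_extension` are hypothetical and chosen for illustration only.

```python
# Minimal sketch (assumed, not the authors' code): the Lovász extension
# relaxes a submodular set function F from {0,1}^n to [0,1]^n, yielding a
# convex, piecewise-linear surrogate that is differentiable almost everywhere.
import numpy as np

def graph_cut(S, weights):
    """Value of the cut induced by vertex set S (boolean mask) under a
    symmetric edge-weight matrix `weights`; cut functions are submodular."""
    S = np.asarray(S, dtype=bool)
    return weights[np.ix_(S, ~S)].sum()

def lovasz_extension(x, F):
    """Lovász extension F^(x) of a set function F (with F(empty) = 0).

    Sort the coordinates of x in decreasing order and accumulate marginal
    gains: F^(x) = sum_k x[sigma(k)] * (F(S_k) - F(S_{k-1})), where S_k
    collects the k largest coordinates.
    """
    n = len(x)
    order = np.argsort(-x)                # indices in decreasing order of x
    S = np.zeros(n, dtype=bool)
    value, prev = 0.0, F(S)               # F(empty set)
    for k in order:
        S[k] = True
        curr = F(S)
        value += x[k] * (curr - prev)     # marginal gain weighted by x[k]
        prev = curr
    return value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.random((4, 4)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
    F = lambda S: graph_cut(S, W)
    x = rng.random(4)
    print("Lovász extension at x:", lovasz_extension(x, F))
```

The relaxation being piecewise linear in x is what makes an (approximate) Jacobian with respect to the parametrization meaningful; how that Jacobian is approximated and analyzed is the subject of the paper itself.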
