

Tutorial

Linear Programming Relaxations for Graphical Models

Amir Globerson · Tommi Jaakkola

Manuel de Falla

Abstract:

Many probabilistic modeling tasks rely on solving challenging inference problems. These combinatorial problems arise, for example, when predicting likely values for variables, as in selecting and orienting residues in protein design, parsing in natural language processing, or learning the model structure itself. In many cases, the inference problems involve densely connected variables (or higher-order dependencies) and are provably hard. However, recent research has shown that some of these difficulties can be overcome by a careful choice of approximation schemes and learning algorithms. These have yielded very encouraging results in a wide array of fields, from machine vision and natural language processing to computational biology and signal processing. In this tutorial, we will focus on linear programming (LP) relaxations, which have been particularly successful in solving inference problems. Intuitively, LP relaxations decompose a complex problem into a set of simpler subproblems that are subsequently encouraged to agree. If the subproblems agree on a solution, then that solution is the optimal one; otherwise the relaxed solution is fractional. Geometrically, the relaxation maintains an outer-bound approximation to a polytope whose vertices correspond to valid solutions. We will introduce and explain the key ideas behind recent approaches, discuss when they can and cannot be applied, how they can be integrated into supervised learning schemes, and what efficient message-passing algorithms exist for solving them. We will also discuss how similar ideas can be used for calculating marginals. Examples from several applications will be provided, including computational biology, natural language processing (see also a separate tutorial on Dual Decomposition for NLP), and structure learning.
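To make the decomposition idea concrete, here is a minimal sketch (not material from the tutorial itself) of dual decomposition for MAP inference on a toy three-variable binary cycle: the cycle is split into a chain subproblem and a single-edge subproblem, and Lagrange multipliers on the shared variables are updated by subgradient steps until the two subproblems agree. All potentials and step sizes below are illustrative assumptions, and brute-force enumeration stands in for the exact tree solvers one would use in practice.

```python
# A minimal sketch of dual decomposition for MAP inference, assuming a toy
# 3-variable binary cycle with illustrative potentials (not from the tutorial).
# The cycle is split into a chain subproblem (1-2-3) and an edge subproblem
# (1-3); Lagrange multipliers on the shared variables x1, x3 are updated by
# subgradient steps until the two subproblems agree, at which point the
# agreed assignment is provably the exact MAP.
import itertools
import numpy as np

# Illustrative log-space potentials (all values are assumptions).
theta1 = np.array([0.0, 0.5])
theta2 = np.array([0.0, -0.6])
theta3 = np.array([0.0, 0.4])
theta12 = np.array([[0.4, -0.4], [-0.4, 0.4]])  # edge (1,2)
theta23 = np.array([[0.4, -0.4], [-0.4, 0.4]])  # edge (2,3)
theta13 = np.array([[0.3, -0.3], [-0.3, 0.3]])  # edge (1,3)

def solve_chain(lam1, lam3):
    # Exactly maximize the chain subproblem; brute force stands in for the
    # dynamic-programming (max-product) solver one would use on real chains.
    best, best_x = -np.inf, None
    for x1, x2, x3 in itertools.product([0, 1], repeat=3):
        s = (theta1[x1] + theta2[x2] + theta3[x3]
             + theta12[x1, x2] + theta23[x2, x3]
             + lam1[x1] + lam3[x3])
        if s > best:
            best, best_x = s, (x1, x2, x3)
    return best, best_x

def solve_edge(lam1, lam3):
    # Exactly maximize the single-edge subproblem, with multipliers subtracted.
    best, best_x = -np.inf, None
    for x1, x3 in itertools.product([0, 1], repeat=2):
        s = theta13[x1, x3] - lam1[x1] - lam3[x3]
        if s > best:
            best, best_x = s, (x1, x3)
    return best, best_x

lam1, lam3 = np.zeros(2), np.zeros(2)
for t in range(200):
    ub_a, (a1, a2, a3) = solve_chain(lam1, lam3)
    ub_b, (b1, b3) = solve_edge(lam1, lam3)
    if (a1, a3) == (b1, b3):
        # Agreement certifies optimality: the dual upper bound is attained.
        print("MAP assignment:", (a1, a2, a3), "value:", round(ub_a + ub_b, 3))
        break
    eta = 1.0 / (1 + t)  # diminishing subgradient step size
    for x in (0, 1):
        # Subgradient step on the dual: move multipliers to penalize the
        # labels on which the two subproblems currently disagree.
        lam1[x] -= eta * ((a1 == x) - (b1 == x))
        lam3[x] -= eta * ((a3 == x) - (b3 == x))
else:
    # No agreement within the iteration budget: the relaxation may be loose
    # (a fractional LP optimum), or more iterations are needed.
    print("No agreement; dual bound:", round(ub_a + ub_b, 3))
```

With these made-up potentials the two subproblems reconcile after roughly a dozen subgradient steps and recover the exact MAP; on a frustrated cycle the same loop would instead stall at a fractional LP optimum, illustrating the agree-or-fractional dichotomy described in the abstract.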
