Maximum a-posteriori (MAP) inference is an important task in many applications. Although the standard formulation gives rise to a hard combinatorial optimization problem, several effective approximations have been proposed and studied in recent years. We focus on linear programming (LP) relaxations, which have achieved state-of-the-art performance in many applications. However, optimizing the resulting program is in general challenging due to non-smoothness and complex, non-separable constraints. Therefore, in this work we study the benefits of augmenting the objective function of the relaxation with strong convexity. Specifically, we introduce strong convexity by adding a quadratic term to the LP relaxation objective. We provide theoretical guarantees for the resulting programs, bounding the difference between their optimal value and the original optimum. Further, we propose suitable optimization algorithms and analyze their convergence.
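To illustrate the core idea of the abstract, the sketch below adds a quadratic term to a toy LP-relaxation objective for a single variable with k labels (where the local polytope reduces to the probability simplex) and solves the resulting strongly concave program by projected gradient ascent. This is a minimal illustration only; the function names (`project_simplex`, `strongly_convex_map`), the step size, and the one-variable setup are assumptions for the example, not the paper's actual algorithm or guarantees.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex
    # using the standard sort-based algorithm.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - tau, 0.0)

def strongly_convex_map(theta, gamma=1.0, iters=200):
    # Maximize <theta, mu> - (gamma/2)||mu||^2 over the simplex:
    # the LP relaxation objective augmented with a quadratic term
    # that makes it gamma-strongly concave.
    k = len(theta)
    mu = np.full(k, 1.0 / k)          # uniform starting point
    lr = 1.0 / gamma                  # step size suited to strong concavity
    for _ in range(iters):
        grad = theta - gamma * mu     # gradient of the augmented objective
        mu = project_simplex(mu + lr * grad)
    return mu

# Example: potentials for 3 labels; with gamma > 0 the maximizer is the
# projection of theta/gamma onto the simplex, so mass spreads over labels
# instead of concentrating on the argmax as in the unregularized LP.
mu = strongly_convex_map(np.array([2.0, 1.0, 0.0]), gamma=2.0)
```

Note the trade-off the paper quantifies: as gamma grows, the program becomes easier to optimize (strong convexity yields faster convergence), but its optimum drifts further from the original LP optimum.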
Author Information
Ofer Meshi (TTI Chicago)
Mehrdad Mahdavi (TTI Chicago)
Alex Schwing (University of Toronto)
More from the Same Authors
- 2021 Spotlight: Per-Pixel Classification is Not All You Need for Semantic Segmentation
  Bowen Cheng · Alex Schwing · Alexander Kirillov
- 2021 Poster: Bridging the Imitation Gap by Adaptive Insubordination
  Luca Weihs · Unnat Jain · Iou-Jen Liu · Jordi Salvador · Svetlana Lazebnik · Aniruddha Kembhavi · Alex Schwing
- 2021 Poster: Per-Pixel Classification is Not All You Need for Semantic Segmentation
  Bowen Cheng · Alex Schwing · Alexander Kirillov
- 2021 Poster: A Contrastive Learning Approach for Training Variational Autoencoder Priors
  Jyoti Aneja · Alex Schwing · Jan Kautz · Arash Vahdat
- 2021 Poster: Meta-learning with an Adaptive Task Scheduler
  Huaxiu Yao · Yu Wang · Ying Wei · Peilin Zhao · Mehrdad Mahdavi · Defu Lian · Chelsea Finn
- 2021 Poster: Class-agnostic Reconstruction of Dynamic Objects from Videos
  Zhongzheng Ren · Xiaoming Zhao · Alex Schwing
- 2021 Poster: On Provable Benefits of Depth in Training Graph Convolutional Networks
  Weilin Cong · Morteza Ramezani · Mehrdad Mahdavi
- 2021 Poster: Perceptual Score: What Data Modalities Does Your Model Perceive?
  Itai Gat · Idan Schwartz · Alex Schwing
- 2016 Poster: Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes
  Dan Garber · Ofer Meshi
- 2016 Oral: Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes
  Dan Garber · Ofer Meshi
- 2016 Poster: Constraints Based Convex Belief Propagation
  Yaniv Tenzer · Alex Schwing · Kevin Gimpel · Tamir Hazan
- 2016 Poster: Learning Deep Parsimonious Representations
  Renjie Liao · Alex Schwing · Richard Zemel · Raquel Urtasun
- 2014 Poster: Efficient Inference of Continuous Markov Random Fields with Polynomial Potentials
  Shenlong Wang · Alex Schwing · Raquel Urtasun
- 2014 Poster: Message Passing Inference for Large Scale Graphical Models with High Order Potentials
  Jian Zhang · Alex Schwing · Raquel Urtasun
- 2013 Poster: Latent Structured Active Learning
  Wenjie Luo · Alex Schwing · Raquel Urtasun
- 2012 Poster: Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins
  Alex Schwing · Tamir Hazan · Marc Pollefeys · Raquel Urtasun
- 2012 Poster: Convergence Rate Analysis of MAP Coordinate Minimization Algorithms
  Ofer Meshi · Tommi Jaakkola · Amir Globerson
- 2010 Spotlight: More data means less inference: A pseudo-max approach to structured learning
  David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson
- 2010 Poster: More data means less inference: A pseudo-max approach to structured learning
  David Sontag · Ofer Meshi · Tommi Jaakkola · Amir Globerson