A successful approach to structured learning is to write the learning objective as a joint function of linear parameters and inference messages, and iterate between updates to each. This paper observes that if the inference problem is “smoothed” through the addition of entropy terms, for fixed messages, the learning objective reduces to a traditional (non-structured) logistic regression problem with respect to parameters. In these logistic regression problems, each training example has a bias term determined by the current set of messages. Based on this insight, the structured energy function can be extended from linear factors to any function class where an “oracle” exists to minimize a logistic loss.
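The reduction the abstract describes can be illustrated with a small sketch: for fixed messages, learning the parameters is an ordinary logistic regression in which each training example carries a fixed per-example bias (standing in for the message-derived offsets). The code below is a minimal illustration of that subproblem, not the paper's implementation; the function name, gradient-descent solver, and toy data are assumptions for illustration.

```python
import numpy as np

def fit_logistic_with_bias(X, y, b, lr=0.1, iters=500):
    """Minimize the logistic loss sum_i log(1 + exp(-y_i (x_i.w + b_i)))
    over w, where each example i has a fixed bias b_i (a stand-in for
    the message-determined offsets in the abstract). Labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(iters):
        margins = y * (X @ w + b)
        # gradient of the averaged logistic loss w.r.t. w
        g = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
        w -= lr * g
    return w

# toy usage: linearly separable data through the origin, zero biases
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
b = np.zeros(100)  # fixed bias terms, e.g. from the current messages
w = fit_logistic_with_bias(X, y, b)
```

In the paper's framing, any learner that can solve this biased logistic subproblem to optimality can serve as the "oracle", which is what lets the energy function move beyond linear factors.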
Author Information
Justin Domke (NICTA)
Related Events (a corresponding poster, oral, or spotlight)
- 2013 Spotlight: Structured Learning via Logistic Regression »
  Fri. Dec 6th 08:00 -- 08:04 PM, Room: Harvey's Convention Center Floor, CC
More from the Same Authors
- 2015 Poster: Maximum Likelihood Learning With Arbitrary Treewidth via Fast-Mixing Parameter Sets »
  Justin Domke
- 2015 Poster: Reflection, Refraction, and Hamiltonian Monte Carlo »
  Hadi Mohasel Afshar · Justin Domke
- 2014 Poster: Projecting Markov Random Field Parameters for Fast Mixing »
  Xianghang Liu · Justin Domke
- 2013 Poster: Projecting Ising Model Parameters for Fast Mixing »
  Justin Domke · Xianghang Liu