
Dual Decomposed Learning with Factorwise Oracle for Structural SVM of Large Output Domain
Ian En-Hsu Yen · Xiangru Huang · Kai Zhong · Ruohan Zhang · Pradeep Ravikumar · Inderjit Dhillon

Mon Dec 05 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #89
Many applications of machine learning involve structured outputs with large domains, where learning a structured predictor is prohibitively expensive due to repeated calls to an expensive inference oracle. In this work, we show that, by decomposing the training of a Structural Support Vector Machine (SVM) into a series of multiclass SVM problems connected through messages, one can replace the expensive structured maximization oracle with a Factorwise Maximization Oracle (FMO) that admits an efficient implementation with complexity sublinear in the size of the factor domain. A Greedy Direction Method of Multiplier (GDMM) algorithm is proposed to exploit the sparsity of messages, and it guarantees $\epsilon$-suboptimality after $O(\log(1/\epsilon))$ passes of FMO calls. We conduct experiments on chain-structured problems and fully-connected problems with large output domains. The proposed approach is orders of magnitude faster than state-of-the-art training algorithms for Structural SVM.
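As a rough illustration of the oracle the abstract describes, the following is a minimal sketch of a factorwise maximization: given one factor's potentials and a sparse set of dual messages from neighboring subproblems, return the maximizing label. The function name, arguments, and data layout are assumptions for illustration, not the paper's implementation; a dense linear scan is shown for clarity, whereas the sublinear complexity claimed in the abstract would require extra structure (e.g. pre-sorted potentials) so that only the sparse message entries and the current maximum need to be inspected.

```python
import numpy as np

def factorwise_max_oracle(theta, messages):
    """Hypothetical sketch of a Factorwise Maximization Oracle (FMO):
    return the label of one factor that maximizes its potential plus
    the sparse messages passed from neighboring multiclass subproblems.

    theta    : dense potential vector over this factor's label domain
    messages : dict {label: value} -- sparse corrections from messages
    """
    scores = theta.copy()
    for label, value in messages.items():  # only the sparse entries change
        scores[label] += value
    best = int(np.argmax(scores))          # dense scan; shown for clarity
    return best, float(scores[best])

# Toy usage: a 4-label factor with a single sparse message on label 2.
best, val = factorwise_max_oracle(np.array([0.1, 0.5, 0.2, 0.0]), {2: 0.6})
```

Here label 2 wins with score 0.2 + 0.6 = 0.8; the sparsity of `messages` is what GDMM exploits to keep each pass cheap.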

Author Information

Ian En-Hsu Yen (University of Texas at Austin)
Xiangru Huang (University of Texas at Austin)

I'm currently a PhD student at the University of Texas at Austin. My advisor is Qixing Huang.

Kai Zhong (University of Texas at Austin)
Ruohan Zhang (University of Texas at Austin)
Pradeep Ravikumar (Carnegie Mellon University)
Inderjit Dhillon (Google & UT Austin)
