Poster
Online Improper Learning with an Approximation Oracle
Elad Hazan · Wei Hu · Yuanzhi Li · Zhiyuan Li

Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #134

We study the following question: given an efficient approximation algorithm for an optimization problem, can we learn efficiently in the same setting? We give a formal affirmative answer to this question in the form of a reduction from online learning to offline approximate optimization, using an efficient algorithm that guarantees near-optimal regret. The algorithm is efficient in terms of the number of oracle calls to a given approximation oracle: it makes only logarithmically many such calls per iteration. This resolves an open question by Kalai and Vempala, and by Garber. Furthermore, our result applies to the more general setting of improper learning problems.
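To make the interaction model concrete, the following is a minimal Python sketch of an online learner over a toy combinatorial decision set that touches the set only through a simulated offline approximation oracle. It is not the paper's algorithm: the Follow-the-Perturbed-Leader-style driver, the toy size-k subset domain, the `approx_oracle` helper, and the parameters `alpha`, `eta`, and `T` are all illustrative assumptions. The paper's reduction instead makes only logarithmically many oracle calls per round and may play improperly (outside the decision set), which is what yields the near-optimal regret guarantee.

```python
import itertools
import numpy as np

# Hypothetical sketch of the interaction model described in the abstract (NOT the
# paper's algorithm): an online linear-loss learner that accesses its decision set
# only through an offline alpha-approximation oracle.

rng = np.random.default_rng(0)

d, k = 6, 3
# Toy combinatorial domain: indicator vectors of all size-k subsets of {0, ..., d-1}.
decision_set = np.array(
    [np.isin(np.arange(d), combo).astype(float)
     for combo in itertools.combinations(range(d), k)]
)

def approx_oracle(cum_loss, alpha=1.5):
    """Simulated offline alpha-approximation oracle for linear minimization
    (alpha >= 1, nonnegative losses): returns some feasible point whose loss is
    at most alpha times the offline optimum."""
    values = decision_set @ cum_loss
    candidates = np.flatnonzero(values <= alpha * values.min() + 1e-9)
    return decision_set[rng.choice(candidates)]

# Follow-the-Perturbed-Leader-style driver: one oracle call per round on a randomly
# perturbed cumulative loss.  The paper's reduction differs -- it makes only O(log T)
# oracle calls per round and may play improperly -- but it interacts with the offline
# problem through exactly this kind of approximation oracle.
T, eta = 500, 10.0
cum_loss = np.zeros(d)
learner_loss = 0.0
for t in range(T):
    play = approx_oracle(cum_loss + rng.uniform(0.0, eta, size=d))
    loss_t = rng.uniform(0.0, 1.0, size=d)  # this round's (here: random) loss vector
    learner_loss += play @ loss_t
    cum_loss += loss_t

best_in_hindsight = (decision_set @ cum_loss).min()
print(f"learner loss: {learner_loss:.1f}  "
      f"best fixed decision in hindsight: {best_in_hindsight:.1f}")
```

The sketch only illustrates the oracle-call protocol; plain FPL with an approximation oracle is not known to guarantee low regret against the approximation benchmark in general, which is precisely the gap the paper's reduction addresses.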

Author Information

Elad Hazan (Princeton University)
Wei Hu (Princeton University)
Yuanzhi Li (Princeton University)
Zhiyuan Li (Princeton University)
