Poster

Online Improper Learning with an Approximation Oracle

Elad Hazan · Wei Hu · Yuanzhi Li · Zhiyuan Li

Room 517 AB #134

Keywords: [ Online Learning ]


Abstract:

We study the following question: given an efficient approximation algorithm for an optimization problem, can we learn efficiently in the same setting? We give a formal affirmative answer to this question in the form of a reduction from online learning to offline approximate optimization, using an efficient algorithm that guarantees near-optimal regret. The algorithm is efficient in terms of the number of calls to a given approximation oracle: it makes only logarithmically many such calls per iteration. This resolves an open question posed by Kalai and Vempala, and by Garber. Furthermore, our result applies to the more general setting of improper learning problems.
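The abstract describes the reduction only at a high level, so the paper's actual algorithm is not reproduced here. As a rough illustration of what an oracle-based online learner looks like, the sketch below runs a follow-the-perturbed-leader style loop (in the spirit of Kalai and Vempala's reduction) against a hypothetical approximation oracle. The oracle signature, the exponential perturbation, and the single oracle call per round are all illustrative assumptions; they are not the authors' method, which makes logarithmically many oracle calls per iteration.

import random
from typing import Callable, List, Sequence

# Hypothetical oracle interface (an assumption for this sketch): given a loss
# vector, return a feasible decision whose loss is within the oracle's
# approximation factor of the best feasible decision for that vector.
ApproxOracle = Callable[[Sequence[float]], Sequence[float]]

def perturbed_leader_with_oracle(oracle: ApproxOracle,
                                 loss_vectors: List[Sequence[float]],
                                 dim: int,
                                 eta: float = 1.0) -> List[Sequence[float]]:
    """Illustrative online loop (follow-the-perturbed-leader style, NOT the
    paper's algorithm): each round, perturb the cumulative loss and ask the
    approximation oracle for a decision on the perturbed objective."""
    cumulative = [0.0] * dim
    decisions = []
    for loss in loss_vectors:
        # Fresh exponential perturbation each round; one common choice in
        # perturbed-leader analyses, used here purely for illustration.
        noise = [random.expovariate(1.0 / eta) for _ in range(dim)]
        perturbed = [c - n for c, n in zip(cumulative, noise)]
        decisions.append(oracle(perturbed))  # one oracle call this round
        cumulative = [c + l for c, l in zip(cumulative, loss)]
    return decisions

Plugging in any offline approximation routine (for example, a greedy approximation over a combinatorial decision set) as the oracle argument yields a runnable, if simplified, instance of learning with an approximation oracle.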
