Spotlight
Tight First- and Second-Order Regret Bounds for Adversarial Linear Bandits
Shinji Ito · Shuichi Hirahara · Tasuku Soma · Yuichi Yoshida

Wed Dec 09 07:10 PM -- 07:20 PM (PST) @ Orals & Spotlights: Learning Theory

We propose novel algorithms with first- and second-order regret bounds for adversarial linear bandits. These regret bounds imply that our algorithms perform well when there is an action achieving a small cumulative loss or the loss has a small variance. In addition, we need only assumptions weaker than those of existing algorithms; our algorithms work on discrete action sets as well as continuous ones without a priori knowledge about losses, and they run efficiently if a linear optimization oracle for the action set is available. These results are obtained by combining optimistic online optimization, continuous multiplicative weight update methods, and a novel technique that we refer to as distribution truncation. We also show that the regret bounds of our algorithms are tight up to polylogarithmic factors.
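As a rough guide (a standard formulation of these notions, not quoted from the paper), the quantities involved can be sketched as follows: the regret R_T compares the player's cumulative loss with that of the best fixed action, L_T^* denotes the best action's cumulative loss, and Q_T is a variance-type measure of the losses. First- and second-order bounds replace the usual sqrt(T) dependence by roughly sqrt(L_T^*) and sqrt(Q_T), respectively, up to dimension and polylogarithmic factors.

% Illustrative definitions (standard conventions; the paper's exact
% quantities, dimension dependence, and log factors may differ).
\[
  R_T = \sum_{t=1}^{T} \langle \ell_t, a_t \rangle
        - \min_{a \in \mathcal{A}} \sum_{t=1}^{T} \langle \ell_t, a \rangle
\]
% Cumulative loss of the best fixed action (drives first-order bounds):
\[
  L_T^{*} = \min_{a \in \mathcal{A}} \sum_{t=1}^{T} \langle \ell_t, a \rangle
\]
% Variance-type quantity (drives second-order bounds), with the empirical
% mean loss vector \bar{\ell}_T:
\[
  Q_T = \sum_{t=1}^{T} \bigl\| \ell_t - \bar{\ell}_T \bigr\|^{2},
  \qquad
  \bar{\ell}_T = \frac{1}{T} \sum_{t=1}^{T} \ell_t
\]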

Author Information

Shinji Ito (NEC Corporation)
Shuichi Hirahara (National Institute of Informatics)
Tasuku Soma (University of Tokyo)
Yuichi Yoshida (National Institute of Informatics and Preferred Infrastructure, Inc.)
