
Poster

Efficient Sublinear-Regret Algorithms for Online Sparse Linear Regression with Limited Observation

Shinji Ito · Daisuke Hatano · Hanna Sumita · Akihiro Yabe · Takuro Fukunaga · Naonori Kakimura · Ken-Ichi Kawarabayashi

Pacific Ballroom #35

Keywords: [ Bandit Algorithms ] [ Online Learning ] [ Convex Optimization ] [ Sparsity and Compressed Sensing ] [ Regression ] [ Hardness of Learning and Approximations ] [ Computational Complexity ]


Abstract: Online sparse linear regression is the task of applying linear regression to examples that arrive sequentially, under the resource constraint that only a limited number of features of each example can be observed. Despite its importance in many practical applications, it has recently been shown that no polynomial-time sublinear-regret algorithm exists unless NP $\subseteq$ BPP, and only an exponential-time sublinear-regret algorithm is known. In this paper, we introduce mild assumptions under which the problem becomes tractable. Under these assumptions, we present polynomial-time sublinear-regret algorithms for online sparse linear regression. In addition, thorough experiments with publicly available data demonstrate that our algorithms outperform other known algorithms.
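For readers unfamiliar with the setting, a minimal sketch of the round-by-round protocol may help. The synthetic environment, the feature-selection heuristic, and the gradient update below are illustrative assumptions, not the paper's algorithms; they only show what "limited observation" means: in each round the learner picks a small set of features, sees only those coordinates, predicts, and incurs a squared loss.

```python
# Illustrative sketch of online sparse linear regression with limited
# observation (assumptions: synthetic data, greedy/uniform feature
# selection, SGD update -- not the paper's algorithms).
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 20, 3, 1000            # d features, k observable per round, T rounds
theta_true = np.zeros(d)         # sparse ground-truth weights (assumption)
theta_true[:k] = 1.0

w = np.zeros(d)                  # learner's weight estimate
cum_loss = 0.0

for t in range(T):
    x = rng.normal(size=d)                    # example arrives
    y = theta_true @ x + 0.1 * rng.normal()   # noisy label

    # Choose k features to observe: greedy by |w|, with occasional
    # uniform exploration (a simple heuristic for illustration only).
    if rng.random() < 0.1:
        S = rng.choice(d, size=k, replace=False)
    else:
        S = np.argsort(-np.abs(w))[:k]

    x_obs = np.zeros(d)
    x_obs[S] = x[S]                           # limited observation

    y_hat = w @ x_obs                         # predict from observed features
    cum_loss += (y_hat - y) ** 2              # incur squared loss

    # Stochastic-gradient update on the observed coordinates only.
    grad = 2.0 * (y_hat - y) * x_obs
    w -= grad / np.sqrt(t + 1)

print(f"average loss after {T} rounds: {cum_loss / T:.3f}")
```

Sublinear regret means the cumulative loss above exceeds that of the best k-sparse predictor in hindsight by only o(T); the paper's contribution is achieving this in polynomial time under mild assumptions.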
