

Poster

An Information-Theoretic Analysis for Thompson Sampling with Many Actions

Shi Dong · Benjamin Van Roy

Room 517 AB #159

Keywords: [ Learning Theory ] [ Bandit Algorithms ] [ Information Theory ]


Abstract:

Information-theoretic Bayesian regret bounds of Russo and Van Roy capture the dependence of regret on prior uncertainty. However, this dependence is through entropy, which can become arbitrarily large as the number of actions increases. We establish new bounds that depend instead on a notion of rate-distortion. Among other things, this allows us to recover through information-theoretic arguments a near-optimal bound for the linear bandit. We also offer a bound for the logistic bandit that dramatically improves on the best previously available, though this bound depends on an information-theoretic statistic that we have only been able to quantify via computation.
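For context, the Thompson sampling algorithm analyzed in the abstract is simple to state: sample a model from the posterior and act greedily on that sample. Below is a minimal illustrative sketch on a Bernoulli bandit with Beta priors; the function name, the Beta(1, 1) prior, and the specific arm means are our choices for illustration and do not come from the paper, which concerns regret bounds rather than implementation.

```python
# Minimal sketch of Thompson sampling on a Bernoulli bandit
# (illustrative only; the paper studies regret bounds, not this code).
import random

def thompson_sampling(true_means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(true_means)
    # Beta(1, 1) priors: per-action success/failure counts.
    alpha = [1] * k
    beta = [1] * k
    total_reward = 0
    for _ in range(horizon):
        # Sample a mean from each posterior; act greedily on the samples.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        a = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[a] else 0
        alpha[a] += reward
        beta[a] += 1 - reward
        total_reward += reward
    return total_reward

reward = thompson_sampling([0.2, 0.5, 0.8], horizon=2000)
print(reward)
```

The quantity the paper bounds is the Bayesian regret, i.e. the expected shortfall of the accumulated reward relative to always playing the best action (here, the arm with mean 0.8).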
