

Poster

Improved Bayesian Regret Bounds for Thompson Sampling in Reinforcement Learning

Ahmadreza Moradipari · Mohammad Pedramfar · Modjtaba Shokrian Zini · Vaneet Aggarwal

Great Hall & Hall B1+B2 (level 1) #1827
Tue 12 Dec 3:15 p.m. PST — 5:15 p.m. PST

Abstract: In this paper, we prove state-of-the-art Bayesian regret bounds for Thompson Sampling in reinforcement learning in a multitude of settings. We present a refined analysis of the information ratio, and show an upper bound of order $\widetilde{O}(H\sqrt{d_{l_1}T})$ in the time-inhomogeneous reinforcement learning problem, where $H$ is the episode length and $d_{l_1}$ is the Kolmogorov $l_1$-dimension of the space of environments. We then find concrete bounds on $d_{l_1}$ in a variety of settings, such as tabular, linear, and finite mixtures, and discuss how our results improve the state of the art.
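For readers unfamiliar with the algorithm being analyzed, below is a minimal sketch of Thompson Sampling (posterior sampling) for a tabular, episodic MDP. This is only the generic algorithm whose Bayesian regret the paper bounds, not the paper's analysis; the environment interface (reset/step), the Dirichlet prior over transitions, and the known-reward assumption are simplifying choices made here for illustration.

    # Minimal sketch of posterior-sampling RL (Thompson Sampling) for a
    # tabular, episodic MDP. Illustrative only; not the paper's method or code.
    import numpy as np

    def psrl(env, S, A, H, num_episodes, rng=None):
        """Tabular posterior sampling with a Dirichlet prior over transitions.

        Assumptions of this sketch (not from the paper):
        - env.reset() -> state, env.step(action) -> (next_state, reward)
        - rewards are known and stored in env.R with shape (S, A)
        """
        rng = rng or np.random.default_rng()
        # Dirichlet(1, ..., 1) prior counts for P(. | s, a)
        counts = np.ones((S, A, S))

        for _ in range(num_episodes):
            # 1) Sample a transition model from the posterior.
            P = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                          for s in range(S)])            # shape (S, A, S)

            # 2) Solve the sampled MDP by backward induction over the horizon.
            Q = np.zeros((H, S, A))
            V = np.zeros((H + 1, S))
            for h in reversed(range(H)):
                Q[h] = env.R + P @ V[h + 1]              # (S, A)
                V[h] = Q[h].max(axis=1)

            # 3) Act greedily w.r.t. the sampled model for one episode and
            #    update the posterior counts with the observed transitions.
            s = env.reset()
            for h in range(H):
                a = int(Q[h, s].argmax())
                s_next, _ = env.step(a)
                counts[s, a, s_next] += 1
                s = s_next
        return counts

At each episode the agent draws one environment from its posterior, plans optimally in that draw, and acts on the plan; the regret bounds discussed in the abstract quantify how quickly this procedure's Bayesian regret grows with the number of rounds $T$, the horizon $H$, and the dimension $d_{l_1}$ of the environment class.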
