No Regrets for Learning the Prior in Bandits
Soumya Basu · Branislav Kveton · Manzil Zaheer · Csaba Szepesvari

Fri Dec 10 08:30 AM -- 10:00 AM (PST) @

We propose AdaTS, a Thompson sampling algorithm that adapts sequentially to the bandit tasks it interacts with. The key idea in AdaTS is to adapt to an unknown task prior distribution by maintaining a distribution over its parameters. When solving a bandit task, that uncertainty is marginalized out and properly accounted for. AdaTS is a fully Bayesian algorithm that can be implemented efficiently in several classes of bandit problems. We derive upper bounds on its Bayes regret that quantify the loss due to not knowing the task prior, and show that this loss is small. Our theory is supported by experiments, in which AdaTS outperforms prior algorithms and works well even on challenging real-world problems.
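To make the idea concrete, here is a minimal illustrative sketch (not the paper's exact algorithm) for sequences of K-armed Gaussian bandits. Each task's arm means are drawn as theta_k ~ N(mu_k, sigma0^2); the prior means mu_k are unknown, so the agent keeps a Gaussian hyper-posterior over them, runs Thompson sampling against the marginal prior (prior uncertainty integrated out), and updates the hyper-posterior after each task. All model choices (Gaussian rewards, known variances, per-arm prior means) are simplifying assumptions for this sketch.

```python
import numpy as np

def adats_gaussian(num_tasks=20, K=5, horizon=200, sigma=1.0, sigma0=0.5,
                   q2=4.0, seed=0):
    """Illustrative AdaTS-style agent for K-armed Gaussian bandit tasks.

    theta_k ~ N(mu_k, sigma0^2) per task; the unknown prior means mu_k get
    a hyper-prior N(0, q2). Thompson sampling uses the marginal prior
    N(m_k, v_k + sigma0^2), where (m_k, v_k) is the current hyper-posterior.
    """
    rng = np.random.default_rng(seed)
    mu_true = rng.normal(0.0, 1.0, K)       # hidden prior means (simulated)
    m, v = np.zeros(K), np.full(K, q2)      # hyper-posterior over each mu_k
    regrets = []
    for _ in range(num_tasks):
        theta = rng.normal(mu_true, sigma0)  # draw this task's arm means
        # marginal (prior-uncertainty-integrated) prior: N(m_k, v_k + sigma0^2)
        post_m, post_v = m.copy(), v + sigma0 ** 2
        n, s = np.zeros(K), np.zeros(K)      # pull counts, reward sums
        for _ in range(horizon):
            a = int(np.argmax(rng.normal(post_m, np.sqrt(post_v))))
            r = rng.normal(theta[a], sigma)
            n[a] += 1.0
            s[a] += r
            # conjugate Gaussian posterior update for the pulled arm
            prec = 1.0 / (v[a] + sigma0 ** 2) + n[a] / sigma ** 2
            post_v[a] = 1.0 / prec
            post_m[a] = post_v[a] * (m[a] / (v[a] + sigma0 ** 2)
                                     + s[a] / sigma ** 2)
        regrets.append(horizon * theta.max() - float(n @ theta))
        # hyper-posterior update: per-arm sample mean obeys
        # ybar_k ~ N(mu_k, sigma0^2 + sigma^2 / n_k) for pulled arms
        pulled = n > 0
        obs_var = sigma0 ** 2 + sigma ** 2 / np.maximum(n, 1.0)
        new_prec = 1.0 / v + pulled / obs_var
        ybar = np.where(pulled, s / np.maximum(n, 1.0), 0.0)
        m = (m / v + pulled * ybar / obs_var) / new_prec
        v = 1.0 / new_prec
    return m, v, regrets
```

As more tasks are solved, the hyper-posterior variances `v` shrink, so later tasks start from a prior close to the true one; the hyper-posterior update here uses only per-arm sample means, a deliberate simplification for readability.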

Author Information

Soumya Basu (Google)

I am a software engineer at Google, Mountain View. My research interest lies in the theory and practice of online learning for enhancing complex systems. I am excited to apply my expertise in online learning to improve performance across Google's large array of products. Before joining Google, I obtained my Ph.D. in ECE at UT Austin under the supervision of Prof. Evdokia Nikolova and Prof. Sanjay Shakkottai, where I was a member of the Wireless Networking and Communications Group (WNCG). Earlier, I completed my undergraduate and Master's studies at IIT Kharagpur, India.

Branislav Kveton (Amazon)
Manzil Zaheer (Google)
Csaba Szepesvari (DeepMind / University of Alberta)
