On the Suboptimality of Thompson Sampling in High Dimensions
Raymond Zhang · Richard Combes

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

In this paper we consider Thompson Sampling for combinatorial semi-bandits. We demonstrate that, perhaps surprisingly, Thompson Sampling is sub-optimal for this problem in the sense that its regret scales exponentially in the ambient dimension, and its minimax regret scales almost linearly. This phenomenon occurs under a wide variety of assumptions, including both non-linear and linear reward functions, in the Bernoulli distribution setting. We also show that adding a fixed amount of forced exploration to Thompson Sampling does not alleviate the problem. We complement our theoretical results with numerical experiments and show that in practice Thompson Sampling can indeed perform very poorly in some high-dimensional situations.
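For readers unfamiliar with the setting, the following is a minimal sketch (not the authors' code) of Thompson Sampling for a Bernoulli combinatorial semi-bandit with a linear reward and a top-m action set; the means, prior, and horizon below are illustrative assumptions:

```python
import random

def thompson_semi_bandit(means, m, horizon, seed=0):
    """Thompson Sampling sketch for a combinatorial semi-bandit.

    Each of the d base arms has an unknown Bernoulli mean. At every round
    the learner samples from independent Beta posteriors, plays the m arms
    with the highest samples (linear reward), and observes the individual
    Bernoulli outcome of every played arm (semi-bandit feedback).
    """
    rng = random.Random(seed)
    d = len(means)
    alpha = [1] * d  # Beta(1, 1) uniform priors
    beta = [1] * d
    total_reward = 0
    for _ in range(horizon):
        # Posterior sampling step: one Beta draw per base arm.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(d)]
        # Greedy oracle for the linear reward: pick the top-m samples.
        action = sorted(range(d), key=lambda i: samples[i], reverse=True)[:m]
        for i in action:
            x = 1 if rng.random() < means[i] else 0  # per-arm feedback
            alpha[i] += x
            beta[i] += 1 - x
            total_reward += x
    return total_reward, alpha, beta
```

On a small instance with two clearly superior arms, the posterior quickly concentrates and the algorithm plays those arms most of the time; the paper's point is that this favorable behavior degrades badly as the ambient dimension grows.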

Author Information

Raymond Zhang (Ecole Normale Supérieure Paris-Saclay)
Richard Combes (Centrale-Supelec)

I am currently an assistant professor at Centrale-Supelec in the Telecommunication department. I received an Engineering Degree from Telecom Paristech (2008), a Master's degree in Mathematics from the University of Paris VII (2009), and a Ph.D. in Mathematics from the University of Paris VI (2013). I was a visiting scientist at INRIA (2012) and a postdoc at KTH (2013). I received the best paper award at CNSM 2011. My current research interests are machine learning, networks, and probability.