This paper introduces and addresses a wide class of stochastic bandit problems in which the function mapping each arm to its expected reward exhibits some known structural properties. Most existing structures (e.g., linear, Lipschitz, unimodal, combinatorial, dueling) are covered by our framework. We derive an asymptotic, instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling; instead, it aims at matching the minimal exploration rates of sub-optimal arms, as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB through numerical experiments on the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling.
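For intuition, the exploration-rate idea behind OSSB can be sketched in the simplest, unstructured setting: for Bernoulli bandits, the classical Lai–Robbins lower bound says each sub-optimal arm a must be sampled roughly log(t) / KL(mu_a, mu*) times. Below is a minimal Python sketch assuming this unstructured case; the helper names and the greedy explore-or-exploit rule are illustrative simplifications, not the paper's full algorithm, which instead solves an optimization problem encoding the known structure.

```python
import math
import random

def kl_bernoulli(p, q, eps=1e-9):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clamped away from 0/1."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def ossb_sketch(means, horizon, seed=0):
    """Exploration-rate-tracking loop for unstructured Bernoulli bandits.

    At each round, if some apparently sub-optimal arm a has been pulled fewer
    than log(t) / KL(mu_hat_a, mu_hat_best) times (its empirical lower-bound
    rate), pull the most under-sampled such arm; otherwise exploit.
    """
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K
    sums = [0.0] * K

    def pull(a):
        counts[a] += 1
        sums[a] += 1.0 if rng.random() < means[a] else 0.0

    for a in range(K):  # initialise with one pull per arm
        pull(a)
    for t in range(K, horizon):
        mu_hat = [sums[a] / counts[a] for a in range(K)]
        best = max(range(K), key=lambda a: mu_hat[a])
        # exploration deficits relative to the empirical lower-bound rates
        deficits = []
        for a in range(K):
            if a == best:
                continue
            target = math.log(t + 1) / max(kl_bernoulli(mu_hat[a], mu_hat[best]), 1e-9)
            deficits.append((target - counts[a], a))
        worst_gap, worst_arm = max(deficits)
        if worst_gap > 0:
            pull(worst_arm)  # explore the most under-sampled arm
        else:
            pull(best)       # exploit the empirically best arm
    return counts

counts = ossb_sketch([0.9, 0.5], horizon=2000)
```

Because the required exploration count grows only logarithmically in t, the sub-optimal arm ends up with on the order of log(T)/KL pulls while the optimal arm receives almost all of the budget.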
Author Information
Richard Combes (Centrale-Supelec)
I am currently an assistant professor at Centrale-Supelec in the Telecommunications department. I received the Engineering Degree from Telecom ParisTech (2008), the Master's degree in Mathematics from the University of Paris VII (2009), and the Ph.D. degree in Mathematics from the University of Paris VI (2013). I was a visiting scientist at INRIA (2012) and a post-doc at KTH (2013). I received the best paper award at CNSM 2011. My current research interests are machine learning, networks, and probability.
Stefan Magureanu (KTH)
Alexandre Proutiere (KTH)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Spotlight: Minimal Exploration in Structured Stochastic Bandits »
  Wed. Dec 6th 01:15 -- 01:20 AM, Hall C
More from the Same Authors
- 2021 Poster: Fast Pure Exploration via Frank-Wolfe »
  Po-An Wang · Ruo-Chun Tzeng · Alexandre Proutiere
- 2021 Poster: Navigating to the Best Policy in Markov Decision Processes »
  Aymen Al Marjani · Aurélien Garivier · Alexandre Proutiere
- 2021 Poster: On the Suboptimality of Thompson Sampling in High Dimensions »
  Raymond Zhang · Richard Combes
- 2020 Poster: Regret in Online Recommendation Systems »
  Kaito Ariu · Narae Ryu · Se-Young Yun · Alexandre Proutiere
- 2020 Poster: Optimal Best-arm Identification in Linear Bandits »
  Yassir Jedra · Alexandre Proutiere
- 2019 Poster: Optimal Sampling and Clustering in the Stochastic Block Model »
  Se-Young Yun · Alexandre Proutiere
- 2018 Poster: Exploration in Structured Reinforcement Learning »
  Jungseul Ok · Alexandre Proutiere · Damianos Tranos
- 2018 Oral: Exploration in Structured Reinforcement Learning »
  Jungseul Ok · Alexandre Proutiere · Damianos Tranos
- 2017 Poster: A Minimax Optimal Algorithm for Crowdsourcing »
  Thomas Bonald · Richard Combes
- 2016 Poster: Optimal Cluster Recovery in the Labeled Stochastic Block Model »
  Se-Young Yun · Alexandre Proutiere
- 2015 Poster: Fast and Memory Optimal Low-Rank Matrix Approximation »
  Se-Young Yun · Marc Lelarge · Alexandre Proutiere
- 2015 Poster: Combinatorial Bandits Revisited »
  Richard Combes · M. Sadegh Talebi · Alexandre Proutiere · Marc Lelarge
- 2014 Poster: Streaming, Memory Limited Algorithms for Community Detection »
  Se-Young Yun · Marc Lelarge · Alexandre Proutiere
- 2013 Poster: Two-Target Algorithms for Infinite-Armed Bandits with Bernoulli Rewards »
  Thomas Bonald · Alexandre Proutiere