Abstract
This paper introduces a general multi-agent bandit model in which each agent faces a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with the largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm across all agents. This makes communication between agents often necessary. This general setting allows us to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization [Shi et al., 2021]. We provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound.
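To make the mixed-reward objective concrete, here is a minimal NumPy sketch of the model's optimal-arm definition, not the paper's algorithm: mu[n, k] stands for the expected reward of arm k for agent n, and W[m, n] for the weight agent m places on agent n's rewards (both names, and the row-stochastic choice of weights, are illustrative assumptions).

    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, n_arms = 3, 5

    # Hypothetical inputs: mu[n, k] is the expected reward of arm k for
    # agent n; W[m, n] is the weight agent m places on agent n's rewards
    # (rows drawn from a Dirichlet so that each row sums to 1).
    mu = rng.uniform(size=(n_agents, n_arms))
    W = rng.dirichlet(np.ones(n_agents), size=n_agents)

    # Expected mixed reward: mixed[m, k] = sum_n W[m, n] * mu[n, k].
    mixed = W @ mu

    # The optimal arm of agent m maximizes its expected mixed reward.
    optimal_arms = mixed.argmax(axis=1)
    print(optimal_arms)

Whenever W places mass off its diagonal, an agent cannot determine its optimal arm from its own samples alone, which is why communication through the controller is often necessary.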
Author Information
Clémence Réda (INRIA)
Sattar Vakili (MediaTek Research)
Emilie Kaufmann (CNRS)
More from the Same Authors
- 2022: Gradient Descent: Robustness to Adversarial Corruption
  Fu-Chieh Chang · Farhang Nabiei · Pei-Yuan Wu · Alexandru Cioba · Sattar Vakili · Alberto Bernacchia
- 2023 Poster: Adaptive Algorithms for Relaxed Pareto Set Identification
  Cyrille KONE · Emilie Kaufmann · Laura Richert
- 2023 Poster: Kernelized Reinforcement Learning with Order Optimal Regret Bounds
  Sattar Vakili · Iuliia Olkhovskaia
- 2023 Poster: An $\varepsilon$-Best-Arm Identification Algorithm for Fixed-Confidence and Beyond
  Marc Jourdan · Rémy Degenne · Emilie Kaufmann
- 2022 Panel: Panel 1A-1: Near-Optimal Collaborative Learning… & Minimax Regret for…
  Clémence Réda · Daniel Vial
- 2022: Poster Session 2
  Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi
- 2022 Poster: Top Two Algorithms Revisited
  Marc Jourdan · Rémy Degenne · Dorian Baudry · Rianne de Heide · Emilie Kaufmann
- 2022 Poster: Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs
  Andrea Tirinzoni · Aymen Al Marjani · Emilie Kaufmann
- 2022 Poster: Efficient Change-Point Detection for Tackling Piecewise-Stationary Bandits
  Lilian Besson · Emilie Kaufmann · Odalric-Ambrym Maillard · Julien Seznec
- 2021 Poster: A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance
  Sudeep Salgia · Sattar Vakili · Qing Zhao
- 2021 Poster: Optimal Order Simple Regret for Gaussian Process Bandits
  Sattar Vakili · Nacime Bouziani · Sepehr Jalali · Alberto Bernacchia · Da-shan Shiu
- 2021 Poster: Scalable Thompson Sampling using Sparse Gaussian Process Models
  Sattar Vakili · Henry Moss · Artem Artemev · Vincent Dutordoir · Victor Picheny
- 2021 Poster: Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification
  Clémence Réda · Andrea Tirinzoni · Rémy Degenne
- 2020 Poster: Sub-sampling for Efficient Non-Parametric Bandit Exploration
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2020 Spotlight: Sub-sampling for Efficient Non-Parametric Bandit Exploration
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2020 Poster: Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
  Anders Jonsson · Emilie Kaufmann · Pierre Menard · Omar Darwiche Domingues · Edouard Leurent · Michal Valko