Poster
Cooperative Stochastic Bandits with Asynchronous Agents and Constrained Feedback
Lin Yang · Yu-Zhen Janice Chen · Stephen Pasteris · Mohammad Hajiesmaili · John C. S. Lui · Don Towsley

Tue Dec 07 08:30 AM -- 10:00 AM (PST)
This paper studies a cooperative multi-armed bandit problem in which $M$ agents work together on the same instance of a $K$-armed stochastic bandit problem, with the goal of maximizing the agents' cumulative reward. The agents are heterogeneous in (i) their limited access to a local subset of arms and (ii) their decision-making rounds, i.e., agents are asynchronous, with different gaps between decisions. The goal is to find the globally optimal arm; an agent may pull any arm, but it observes the reward only when the selected arm is local. The challenge is the tradeoff each agent faces between pulling a local arm, with the possibility of observing feedback, and relying on the observations of other agents, which may arrive at different rates. Naive extensions of traditional algorithms suffer arbitrarily poor regret, which scales with the aggregate action frequency of any $\textit{suboptimal}$ arm held by slow agents. We resolve this issue with a novel two-stage learning algorithm, called $\texttt{CO-LCB}$, whose regret is instead a function of the aggregate action frequency of the agents containing the $\textit{optimal}$ arm. We also show that the regret of $\texttt{CO-LCB}$ matches the regret lower bound up to a small factor.
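To make the setting concrete, the following is a minimal toy simulation of the problem described in the abstract: $M$ asynchronous agents share statistics on a $K$-armed Bernoulli bandit, each agent holds a local subset of arms, and feedback is observed only for local pulls. The arm means, local subsets, activation rates, and the naive shared-UCB rule are all illustrative assumptions; this sketch is not the paper's $\texttt{CO-LCB}$ algorithm.

```python
import math
import random

def simulate(T=3000, seed=0):
    """Toy cooperative bandit with asynchronous agents and constrained
    feedback.  The shared-UCB rule below is a naive baseline used only
    to illustrate the setting, NOT the paper's CO-LCB algorithm."""
    rng = random.Random(seed)
    means = [0.30, 0.45, 0.60, 0.80]   # true Bernoulli arm means (assumed)
    local = [{0, 1}, {1, 2}, {2, 3}]   # agent m's local arm subset (assumed)
    rates = [1.0, 0.5, 0.1]            # per-round pull probability: agent 2,
                                       # the only holder of the best arm, is slow
    K, M = len(means), len(local)
    counts = [0] * K                   # shared observation counts per arm
    sums = [0.0] * K                   # shared reward sums per arm
    pulls = 0
    for t in range(1, T + 1):
        for m in range(M):
            if rng.random() > rates[m]:
                continue               # agent m makes no decision this round

            def ucb(k):
                # Shared UCB index; unobserved arms get priority.
                if counts[k] == 0:
                    return float("inf")
                return sums[k] / counts[k] + math.sqrt(2 * math.log(t) / counts[k])

            # Naive rule: pull the best local arm by the shared index.
            arm = max(local[m], key=ucb)
            pulls += 1
            # Constrained feedback: the reward is observed (and the shared
            # statistics updated) only because the pulled arm is local.
            reward = 1.0 if rng.random() < means[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
    return {"counts": counts, "pulls": pulls, "means": means}
```

In this configuration the optimal arm (index 3) is local only to the slowest agent, which is exactly the regime where the abstract says naive extensions of single-agent algorithms can do arbitrarily poorly.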

Author Information

Lin Yang (UMass)
Yu-Zhen Janice Chen (College of Information and Computer Science, University of Massachusetts, Amherst)
Stephen Pasteris (University College London)
Mohammad Hajiesmaili (UMass Amherst)
John C. S. Lui (The Chinese University of Hong Kong)
Don Towsley (UMass - Amherst)
