

Poster

Communication Efficient Distributed Learning for Kernelized Contextual Bandits

Chuanhao Li · Huazheng Wang · Mengdi Wang · Hongning Wang

Hall J (level 1) #307

Keywords: [ communication efficiency ] [ distributed learning ] [ kernelized method ] [ contextual bandit ]


Abstract: We tackle the communication efficiency challenge of learning kernelized contextual bandits in a distributed setting. Despite recent advances in communication-efficient distributed bandit learning, existing solutions are restricted to simple models like multi-armed bandits and linear bandits, which hampers their practical utility. In this paper, instead of assuming the existence of a linear reward mapping from the features to the expected rewards, we consider non-linear reward mappings by letting agents collaboratively search in a reproducing kernel Hilbert space (RKHS). This introduces significant challenges in communication efficiency, as distributed kernel learning requires transferring raw data, leading to a communication cost that grows linearly w.r.t. the time horizon $T$. We address this issue by having all agents communicate via a common Nystr\"{o}m embedding that is updated adaptively as more data points are collected. We rigorously prove that our algorithm attains a sub-linear rate in both regret and communication cost.
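
To make the communication mechanism more concrete, the sketch below illustrates one plausible reading of the Nystr\"{o}m-embedding idea described in the abstract: agents project raw contexts onto a shared dictionary of anchor points and exchange only the resulting finite-dimensional sufficient statistics, rather than raw data. This is a minimal sketch under stated assumptions, not the paper's actual algorithm; the kernel choice (RBF), the helper names `rbf_kernel`, `nystrom_map`, and `AgentStats`, and parameters such as `gamma` and `lam` are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_map(X, dictionary, gamma=1.0, reg=1e-8):
    """Map raw contexts X into the finite-dimensional Nystrom embedding
    defined by a shared dictionary of anchor points."""
    K_dd = rbf_kernel(dictionary, dictionary, gamma)
    K_xd = rbf_kernel(X, dictionary, gamma)
    # Symmetric inverse square root of the (regularized) dictionary Gram matrix.
    vals, vecs = np.linalg.eigh(K_dd + reg * np.eye(len(dictionary)))
    K_dd_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return K_xd @ K_dd_inv_sqrt   # shape (n, m): one m-dim embedding per row

class AgentStats:
    """Per-agent sufficient statistics kept in the shared embedding space;
    only these m x m / m-dim quantities would be communicated, not raw data."""
    def __init__(self, m, lam=1.0):
        self.A = lam * np.eye(m)   # regularized covariance of embeddings
        self.b = np.zeros(m)       # embedding-weighted rewards

    def update(self, z, reward):
        self.A += np.outer(z, z)
        self.b += reward * z

    def estimate(self):
        # Ridge-regression estimate of the reward function in the embedding.
        return np.linalg.solve(self.A, self.b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dictionary = rng.normal(size=(5, 3))   # hypothetical shared anchor points
    agent = AgentStats(m=5)
    x = rng.normal(size=(1, 3))            # a new context observed by one agent
    z = nystrom_map(x, dictionary)[0]
    agent.update(z, reward=1.0)
    print(agent.estimate())
```

Under these assumptions, the per-round payload is independent of the number of raw observations, which is in the spirit of the sub-linear communication cost claimed in the abstract; how the dictionary itself is grown adaptively is specified in the paper, not here.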
