We introduce a new multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of these variables. Upon taking an action, the agent observes the realizations of all variables. This model is motivated by marketing campaigns and recommender systems, where the variables represent outcomes on individual customers, such as clicks. We propose UCB-style algorithms that estimate the uplifts of the actions over a baseline. We study multiple variants of the problem, including when the baseline and affected variables are unknown, and prove sublinear regret bounds for all of these. In addition, we provide regret lower bounds that justify the necessity of our modeling assumptions. Experiments on synthetic and real-world datasets demonstrate the benefit of methods that estimate the uplifts over policies that do not use this structure.
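To make the structure concrete, here is a minimal, hypothetical sketch (not the authors' exact algorithm) of a UCB-style bandit that exploits uplift structure: the reward is a sum of Bernoulli variables, each arm shifts the means of a known affected subset, the baseline means are assumed known, and the UCB index is computed on the estimated uplift over the baseline using only the affected variables. All names and parameters are illustrative.

```python
import math
import random

random.seed(0)

K = 3                          # number of reward variables
baseline = [0.5, 0.5, 0.5]     # baseline means, assumed known here
# Each arm: (affected variable indices, mean shifts on those variables)
arms = [
    ({0}, {0: +0.3}),          # uplift 0.3
    ({1}, {1: +0.1}),          # uplift 0.1
    (set(), {}),               # no effect (uplift 0)
]

def pull(arm):
    """Draw all K variables; the arm shifts the means of its affected set."""
    _, shifts = arms[arm]
    return [1 if random.random() < baseline[i] + shifts.get(i, 0.0) else 0
            for i in range(K)]

T = 2000
counts = [0] * len(arms)
uplift_sums = [0.0] * len(arms)   # running sums of observed uplifts

for t in range(1, T + 1):
    if t <= len(arms):
        a = t - 1                 # initialization: play each arm once
    else:
        # UCB index on the *uplift* over the baseline, estimated from
        # the affected variables only -- the key structural idea.
        a = max(range(len(arms)), key=lambda j: (
            uplift_sums[j] / counts[j]
            + math.sqrt(2 * math.log(t) / counts[j])))
    obs = pull(a)
    counts[a] += 1
    affected, _ = arms[a]
    uplift_sums[a] += sum(obs[i] - baseline[i] for i in affected)

print(counts)  # the high-uplift arm should dominate the pulls
```

Because each arm's uplift is estimated only on the few variables it affects, the estimation noise is much smaller than if the full reward sum were used, which is the benefit the abstract attributes to uplift-aware methods.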
Author Information
Yu-Guan Hsieh (Université Grenoble Alpes / Inria)
Shiva Kasiviswanathan (Amazon)
Branislav Kveton (Amazon)
More from the Same Authors
2021: Reconstructing Test Labels from Noisy Loss Scores (Extended Abstract)
Abhinav Aggarwal · Shiva Kasiviswanathan · Zekun Xu · Oluwaseyi Feyisetan · Nathanael Teissier

2022: Diffusion Prior for Online Decision Making: A Case Study of Thompson Sampling
Yu-Guan Hsieh · Shiva Kasiviswanathan · Branislav Kveton · Patrick Blöbaum

2022 Poster: No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation
Yu-Guan Hsieh · Kimon Antonakopoulos · Volkan Cevher · Panayotis Mertikopoulos

2021 Poster: No Regrets for Learning the Prior in Bandits
Soumya Basu · Branislav Kveton · Manzil Zaheer · Csaba Szepesvari

2021 Poster: Collaborative Causal Discovery with Atomic Interventions
Raghavendra Addanki · Shiva Kasiviswanathan

2020 Poster: Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling
Yu-Guan Hsieh · Franck Iutzeler · Jérôme Malick · Panayotis Mertikopoulos

2020 Spotlight: Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling
Yu-Guan Hsieh · Franck Iutzeler · Jérôme Malick · Panayotis Mertikopoulos

2020 Poster: Differentiable Meta-Learning of Bandit Policies
Craig Boutilier · Chih-wei Hsu · Branislav Kveton · Martin Mladenov · Csaba Szepesvari · Manzil Zaheer

2020 Poster: Latent Bandits Revisited
Joey Hong · Branislav Kveton · Manzil Zaheer · Yinlam Chow · Amr Ahmed · Craig Boutilier

2019 Poster: On the convergence of single-call stochastic extra-gradient methods
Yu-Guan Hsieh · Franck Iutzeler · Jérôme Malick · Panayotis Mertikopoulos

2018 Poster: TopRank: A practical algorithm for online stochastic ranking
Tor Lattimore · Branislav Kveton · Shuai Li · Csaba Szepesvari

2017 Poster: Online Influence Maximization under Independent Cascade Model with Semi-Bandit Feedback
Zheng Wen · Branislav Kveton · Michal Valko · Sharan Vaswani

2015 Poster: Efficient Thompson Sampling for Online Matrix-Factorization Recommendation
Jaya Kawale · Hung H Bui · Branislav Kveton · Long Tran-Thanh · Sanjay Chawla

2015 Poster: Combinatorial Cascading Bandits
Branislav Kveton · Zheng Wen · Azin Ashkan · Csaba Szepesvari

2012 Poster: Online L1-Dictionary Learning with Application to Novel Document Detection
Shiva Kasiviswanathan · Huahua Wang · Arindam Banerjee · Prem Melville