
Mortal Multi-Armed Bandits
Filip Radlinski · Deepayan Chakrabarti · Ravi Kumar · Eli Upfal

Mon Dec 08 08:45 PM -- 12:00 AM (PST)
We formulate and study a new variant of the $k$-armed bandit problem, motivated by e-commerce applications. In our model, arms have (stochastic) lifetimes after which they expire. In this setting, an algorithm needs to continuously explore new arms, in contrast to the standard $k$-armed bandit model in which arms are available indefinitely and exploration is reduced once an optimal arm is identified with near-certainty. The main motivation for our setting is online advertising, where ads have limited lifetimes due to, for example, the nature of their content and their campaign budgets. An algorithm needs to choose among a large collection of ads, more than can be fully explored within the ads' lifetimes. We present an optimal algorithm for the state-aware (deterministic reward function) case, and build on this technique to obtain an algorithm for the state-oblivious (stochastic reward function) case. Empirical studies on various reward distributions, including one derived from a real-world ad serving application, show that the proposed algorithms significantly outperform the standard multi-armed bandit approaches applied to these settings.
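To make the setting concrete, the following is a minimal sketch of a mortal-bandit environment with geometric arm lifetimes, played by a naive epsilon-greedy baseline that draws fresh arms as old ones expire. All names and parameters (`death_prob`, `epsilon`, the payoff model) are illustrative assumptions; this is not the paper's algorithm, only the problem setup it addresses.

```python
import random

def simulate(horizon=2000, k=10, epsilon=0.1, death_prob=0.01, seed=0):
    """Mortal-bandit sketch: each arm pays 1 with a fixed hidden probability
    and dies independently each step with probability death_prob (so its
    lifetime is geometric with mean 1/death_prob). Assumed model, not the
    paper's exact formulation."""
    rng = random.Random(seed)
    arms = {i: rng.random() for i in range(k)}   # arm id -> payoff prob
    next_id = k
    pulls = {i: 0 for i in arms}
    wins = {i: 0 for i in arms}
    total_reward = 0
    for _ in range(horizon):
        # Expire arms, then draw fresh replacements to keep k arms alive.
        for a in list(arms):
            if rng.random() < death_prob:
                del arms[a]
        while len(arms) < k:
            arms[next_id] = rng.random()
            pulls[next_id] = wins[next_id] = 0
            next_id += 1
        # Epsilon-greedy over the currently live arms: unexplored arms get
        # an optimistic estimate of 1.0 so they are tried at least once.
        if rng.random() < epsilon:
            a = rng.choice(list(arms))
        else:
            a = max(arms, key=lambda i: wins[i] / pulls[i] if pulls[i] else 1.0)
        r = 1 if rng.random() < arms[a] else 0
        pulls[a] += 1
        wins[a] += r
        total_reward += r
    return total_reward, next_id

reward, arms_seen = simulate()
```

Because arms keep dying, `arms_seen` grows well beyond the initial `k`, so exploration never stops: this is the key difference from the standard model that the paper's state-aware and state-oblivious algorithms are designed to exploit.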

Author Information

Filip Radlinski (Microsoft)
Deepayan Chakrabarti (Yahoo! Research)
Ravi Kumar (Yahoo!)
Eli Upfal (Brown University)
