

Poster

Estimation Bias in Multi-Armed Bandit Algorithms for Search Advertising

Min Xu · Tao Qin · Tie-Yan Liu

Harrah's Special Events Center, 2nd Floor

Abstract:

In search advertising, the search engine needs to select the most profitable advertisements to display. This task can be formulated as an instance of online learning with partial feedback, also known as the stochastic multi-armed bandit (MAB) problem. In this paper, we show that naively applying MAB algorithms to advertisement selection in search advertising produces two kinds of bias: sample selection bias, which harms the search engine by decreasing expected revenue, and "estimation of the largest mean" (ELM) bias, which harms the advertisers by increasing game-theoretic player-regret. We then propose simple bias-correction methods that benefit both the search engine and the advertisers.
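
The abstract's ELM bias can be seen in a small Monte Carlo sketch (not the authors' method; the arm means, sample sizes, and trial count below are illustrative assumptions): estimating the largest mean by plugging in the maximum of the arms' empirical means tends to overestimate the true largest mean.

```python
# Minimal sketch of "estimation of the largest mean" (ELM) bias.
# Assumed setup: a few ads with fixed (hypothetical) click probabilities,
# each observed for a fixed number of impressions.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.50, 0.52, 0.55])   # hypothetical per-ad click rates
n_pulls = 30                                # assumed impressions per ad
n_trials = 20_000                           # Monte Carlo repetitions

elm_estimates = np.empty(n_trials)
for t in range(n_trials):
    # Empirical click rate of each ad from n_pulls Bernoulli observations.
    emp_means = rng.binomial(n_pulls, true_means) / n_pulls
    # Naive ELM estimate: the maximum of the empirical means.
    elm_estimates[t] = emp_means.max()

print(f"true largest mean       : {true_means.max():.4f}")
print(f"mean naive ELM estimate : {elm_estimates.mean():.4f}")  # typically biased upward
```

Running the sketch shows the average of the naive estimate exceeding the true largest mean, which is the upward bias the paper attributes to harming advertisers through increased player-regret.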
