

Contextual semibandits via supervised learning oracles

Akshay Krishnamurthy · Alekh Agarwal · Miro Dudik

Area 5+6+7+8 #36

Keywords: [ Ranking and Preference Learning ] [ Online Learning ] [ Learning Theory ] [ Bandit Algorithms ]


We study an online decision-making problem in which, on each round, a learner chooses a list of items based on some side information, receives a scalar feedback value for each individual item, and obtains a reward that is linearly related to this feedback. These problems, known as contextual semibandits, arise in crowdsourcing, recommendation, and many other domains. This paper reduces contextual semibandits to supervised learning, allowing us to leverage powerful supervised learning methods in this partial-feedback setting. Our first reduction applies when the mapping from feedback to reward is known and leads to a computationally efficient algorithm with near-optimal regret. We show that this algorithm outperforms state-of-the-art approaches on real-world learning-to-rank datasets, demonstrating the advantage of oracle-based algorithms. Our second reduction applies to the previously unstudied setting in which the linear mapping from feedback to reward is unknown. Our regret guarantees are superior to those of prior techniques that ignore the feedback.
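To make the setting concrete, the per-round contextual-semibandit interaction with a known linear feedback-to-reward map can be sketched as below. All names, dimensions, and the plug-in ridge learner standing in for the supervised-learning oracle are illustrative assumptions for this sketch, not the paper's actual algorithm or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, d = 10, 3, 5                 # items, slate length, context dimension (illustrative)
w = np.array([1.0, 0.5, 0.25])     # known linear map from per-item feedback to reward
theta = rng.normal(size=(K, d))    # environment's hidden per-item quality parameters

def semibandit_round(policy):
    """One round: context -> ordered list of L items -> per-item feedback + scalar reward."""
    x = rng.normal(size=d)                                       # side information (context)
    slate = policy(x)                                            # learner's chosen list
    feedback = theta[slate] @ x + rng.normal(scale=0.1, size=L)  # one feedback value per item
    reward = float(w @ feedback)                                 # reward is linear in the feedback
    return x, slate, feedback, reward

def greedy_slate(x, estimates, eps=0.1):
    """Epsilon-greedy stand-in for the supervised-learning oracle: rank by predicted feedback."""
    if rng.random() < eps:
        return rng.choice(K, size=L, replace=False)
    return np.argsort(-(estimates @ x))[:L]

# Plug-in learner: ridge-regress each item's observed feedback, then act greedily.
A = np.stack([np.eye(d)] * K)      # per-item regularized Gram matrices
b = np.zeros((K, d))
total_reward = 0.0
for t in range(1000):
    est = np.stack([np.linalg.solve(A[i], b[i]) for i in range(K)])
    x, slate, fb, r = semibandit_round(lambda c: greedy_slate(c, est))
    for pos, i in enumerate(slate):  # semibandit feedback: one value per chosen item
        A[i] += np.outer(x, x)
        b[i] += fb[pos] * x
    total_reward += r
```

The key structural point the sketch illustrates is that the learner observes feedback only for the items it actually chose (partial feedback), yet each chosen item's feedback supplies a supervised regression target, which is what makes a reduction to supervised learning possible.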
