Poster

Risk-Aversion in Multi-armed Bandits

Amir Sani · Alessandro Lazaric · Remi Munos

Harrah’s Special Events Center 2nd Floor

Abstract:

In stochastic multi-armed bandits the objective is to solve the exploration-exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be intrinsically more difficult than the standard multi-armed bandit setting, due in part to an exploration risk which introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we introduce two new algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
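To make the risk-return objective concrete, the following is a minimal Python sketch of an optimistic selection rule over an empirical mean-variance index (variance minus rho times the mean, minimized). The Gaussian arm parameters, the rho value, and the confidence width are illustrative assumptions; this is not the paper's exact algorithms or bounds, which this page does not detail.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bandit: (mean, std) per arm. Arm 1 has a slightly lower
# mean but far lower variance, so it can win under risk-aversion.
arm_params = [(0.6, 0.5), (0.5, 0.05)]
rho = 1.0          # risk-aversion coefficient: weight of mean vs. variance
n_rounds = 5000

K = len(arm_params)
counts = np.zeros(K, dtype=int)
sums = np.zeros(K)
sq_sums = np.zeros(K)

def pull(i):
    mu, sigma = arm_params[i]
    return rng.normal(mu, sigma)

for t in range(n_rounds):
    if t < K:                     # pull each arm once to initialize
        i = t
    else:
        mean = sums / counts
        var = sq_sums / counts - mean ** 2
        mv = var - rho * mean     # empirical mean-variance (lower = better)
        # Optimism for a minimization objective: subtract a confidence
        # width (an assumed, simplified form, not the paper's bound).
        width = np.sqrt(2.0 * np.log(t + 1) / counts)
        i = int(np.argmin(mv - width))
    x = pull(i)
    counts[i] += 1
    sums[i] += x
    sq_sums[i] += x * x

print("pull counts:", counts)

Under these parameters the low-variance arm typically accumulates most of the pulls despite its lower mean, illustrating the risk-averse behavior the abstract describes; note that the algorithm's own randomness in how it splits pulls is exactly the "exploration risk" the abstract attributes regret to.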
