Poster
Model Selection for Contextual Bandits
Dylan Foster · Akshay Krishnamurthy · Haipeng Luo
East Exhibition Hall B, C #5
Keywords: [ Bandit Algorithms ] [ Algorithms ] [ Model Selection and Structure Learning ]
Abstract:
We introduce the problem of model selection for contextual bandits, where a learner must adapt to the complexity of the optimal policy while balancing exploration and exploitation. Our main result is a new model selection guarantee for linear contextual bandits. We work in the stochastic realizable setting with a sequence of nested linear policy classes of dimension $d_1 < d_2 < \cdots$, where the $m_\star$-th class contains the optimal policy, and we design an algorithm that achieves $\tilde{O}(T^{2/3} d_{m_\star}^{1/3})$ regret with no prior knowledge of the optimal dimension $d_{m_\star}$. The algorithm also achieves regret $\tilde{O}(T^{3/4} + \sqrt{T d_{m_\star}})$, which is optimal for $d_{m_\star} \geq \sqrt{T}$. This is the first model selection result for contextual bandits with non-vacuous regret for all values of $d_{m_\star}$, and to the best of our knowledge is the first positive result of this type for any online learning setting with partial information. The core of the algorithm is a new estimator for the gap between the best losses achievable by two linear policy classes, which we show admits a convergence rate faster than the rate required to learn the parameters of either class.
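The regret guarantees stated in the abstract can be collected in one display; the following is an illustrative sketch of the setting's notation (the symbols $\Pi_m$, $\ell_t$, $x_t$, and $a_t$ are standard conventions assumed here, not taken from the abstract):

```latex
% Nested linear policy classes \Pi_1 \subseteq \Pi_2 \subseteq \cdots,
% where \Pi_m has dimension d_m and the optimal policy \pi_\star \in \Pi_{m_\star}
% (the index m_\star is unknown to the learner).
\[
  \mathrm{Reg}(T)
    \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T}
      \big(\ell_t(a_t) - \ell_t(\pi_\star(x_t))\big)\right]
    \;\leq\; \tilde{O}\!\big(T^{2/3}\, d_{m_\star}^{1/3}\big),
\]
% and the algorithm simultaneously admits the bound
\[
  \mathrm{Reg}(T)
    \;\leq\; \tilde{O}\!\big(T^{3/4} + \sqrt{T\, d_{m_\star}}\big),
  \qquad \text{which is optimal whenever } d_{m_\star} \geq \sqrt{T}.
\]
```

Both bounds hold with no prior knowledge of $d_{m_\star}$; the second dominates the first once $d_{m_\star} \geq \sqrt{T}$.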