## Online learning in MDPs with linear function approximation and bandit feedback

### Gergely Neu · Iuliia Olkhovskaia

Keywords: [ Reinforcement Learning and Planning ] [ Online Learning ] [ Bandits ]

Tue 7 Dec 8:30 a.m. PST — 10 a.m. PST

Abstract: We consider the problem of online learning in an episodic Markov decision process, where the reward function is allowed to change between episodes in an adversarial manner and the learner only observes the rewards associated with its own actions. We assume that rewards and the transition function can be represented as linear functions in terms of a known low-dimensional feature map, which allows us to consider the setting where the state space is arbitrarily large. We also assume that the learner has perfect knowledge of the MDP dynamics. Our main contribution is an algorithm whose expected regret after $T$ episodes is bounded by $\widetilde{\mathcal{O}}(\sqrt{dHT})$, where $H$ is the number of steps in each episode and $d$ is the dimensionality of the feature map.
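As a minimal sketch of the linear-representation assumption in the abstract, the toy code below shows rewards that are linear in a known $d$-dimensional feature map $\phi(s,a)$, with an unknown parameter vector that may change between episodes. All names (`phi`, `theta`, the specific feature construction) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

d = 4                       # feature dimension (the "d" in the regret bound)
rng = np.random.default_rng(0)

def phi(state, action):
    """A toy stand-in for the known feature map phi: S x A -> R^d."""
    v = np.cos(np.arange(1, d + 1) * (state + 0.5 * action))
    return v / np.linalg.norm(v)   # normalized for convenience

# Unknown reward parameter; under adversarial rewards it may change
# from one episode to the next.
theta = rng.normal(size=d)

def reward(state, action):
    # Linear reward assumption: r(s, a) = <phi(s, a), theta>
    return phi(state, action) @ theta

# Bandit feedback: the learner observes only reward(s, a) at the
# state-action pairs it actually visits, never theta itself.
r = reward(0.3, 1)
```

Since `phi` returns unit-norm features here, every observed reward is bounded in magnitude by `np.linalg.norm(theta)`, which is the kind of boundedness regularity such analyses typically rely on.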
