Poster
Near-optimal Reinforcement Learning in Factored MDPs
Ian Osband · Benjamin Van Roy

Wed Dec 10 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D
Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer $\Omega(\sqrt{SAT})$ regret on some MDP, where $T$ is the elapsed time and $S$ and $A$ are the cardinalities of the state and action spaces. This implies $T = \Omega(SA)$ time to guarantee a near-optimal policy. In many settings of practical interest, due to the curse of dimensionality, $S$ and $A$ can be so enormous that this learning time is unacceptable. We establish that, if the system is known to be a \emph{factored} MDP, it is possible to achieve regret that scales polynomially in the number of \emph{parameters} encoding the factored MDP, which may be exponentially smaller than $S$ or $A$. We provide two algorithms that satisfy near-optimal regret bounds in this context: posterior sampling reinforcement learning (PSRL) and an upper confidence bound algorithm (UCRL-Factored).
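To make the setting concrete, the following is a minimal illustrative sketch of the basic posterior sampling loop (PSRL) for a finite-horizon tabular MDP, assuming known rewards and an independent Dirichlet prior on each transition distribution. It is not the paper's algorithm: the paper's contribution is to run this kind of scheme with a prior over the factored parameterization of the MDP, which is what yields regret scaling in the number of factored parameters rather than in S and A.

# Minimal sketch of posterior sampling RL (PSRL) for a finite-horizon tabular MDP.
# Illustrative only: state/action sizes, horizon, and reward table are assumptions,
# and the factored-MDP prior from the paper is not reproduced here.

import numpy as np

S, A, H = 10, 4, 15          # assumed state space, action space, and episode horizon
rng = np.random.default_rng(0)

# Rewards assumed known for simplicity (the paper also treats unknown rewards).
R = rng.uniform(size=(S, A))

# True transition kernel, unknown to the agent; used only to simulate experience.
P_true = rng.dirichlet(np.ones(S), size=(S, A))

# Dirichlet posterior over transitions, initialized with uniform prior counts.
counts = np.ones((S, A, S))

def sample_mdp(counts):
    """Draw one transition kernel from the current Dirichlet posterior."""
    P = np.empty((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(counts[s, a])
    return P

def solve_finite_horizon(P, R, H):
    """Backward induction: optimal nonstationary policy for the sampled MDP."""
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V                 # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] V[s']
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

for episode in range(200):
    P_sample = sample_mdp(counts)                  # 1. sample an MDP from the posterior
    policy = solve_finite_horizon(P_sample, R, H)  # 2. solve the sampled MDP
    s = rng.integers(S)
    for h in range(H):                             # 3. act greedily w.r.t. the sample
        a = policy[h, s]
        s_next = rng.choice(S, p=P_true[s, a])
        counts[s, a, s_next] += 1                  # 4. update the posterior
        s = s_next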

Author Information

Ian Osband (DeepMind)
Benjamin Van Roy (Stanford University)
