

Poster

Worst-Case Regret Bounds for Exploration via Randomized Value Functions

Daniel Russo

East Exhibition Hall B + C #184

Keywords: [ Reinforcement Learning and Planning ] [ Exploration ] [ Markov Decision Processes ] [ Reinforcement Learning and Planning -> Decision and Control ]


Abstract:

This paper studies a recent proposal to use randomized value functions to drive exploration in reinforcement learning. These randomized value functions are generated by injecting random noise into the training data, making the approach compatible with many popular methods for estimating parameterized value functions. By providing a worst-case regret bound for tabular finite-horizon Markov decision processes, we show that planning with respect to these randomized value functions can induce provably efficient exploration.
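To make the idea concrete, below is a minimal sketch of how a randomized-value-function agent of this kind might be implemented in the tabular finite-horizon setting: each episode, a Q-function is refit to noise-perturbed training data and the agent then plans greedily with respect to it. The function name, noise scale `sigma`, and regularizer `lam` are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def randomized_q_fit(data, S, A, H, sigma=1.0, lam=1.0, rng=None):
    """Fit one randomized Q-function for a tabular finite-horizon MDP (sketch).

    data[h] is a list of observed transitions (s, a, r, s_next) taken at step h.
    Random noise is injected into the training targets, so each call returns a
    different plausible value function to plan against.
    """
    rng = np.random.default_rng() if rng is None else rng
    Q = np.zeros((H + 1, S, A))  # Q[H] = 0: value after the horizon
    for h in range(H - 1, -1, -1):          # backward induction over steps
        for s in range(S):
            for a in range(A):
                obs = [(r, s2) for (ss, aa, r, s2) in data[h] if ss == s and aa == a]
                # Regression targets: reward plus next-step value, each
                # perturbed with Gaussian noise (noise injected into the data).
                targets = [r + Q[h + 1, s2].max() + sigma * rng.normal()
                           for (r, s2) in obs]
                # Regularize toward a random prior draw so the estimate stays
                # randomized even where little data has been collected.
                prior = sigma * rng.normal()
                Q[h, s, a] = (sum(targets) + lam * prior) / (len(obs) + lam)
    return Q  # act greedily each step: a = argmax_a Q[h, s, a]
```

The same noise-injection idea carries over to parameterized value functions by perturbing the regression targets fed to any least-squares-style fitting routine; the tabular loop above is only the simplest instance.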
