Poster

Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model

Mark Rowland · Kevin Li · Remi Munos · Clare Lyle · Yunhao Tang · Will Dabney

West Ballroom A-D #6609
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We propose a new algorithm for model-based distributional reinforcement learning (RL), and prove that it is minimax-optimal for approximating return distributions in the generative model regime (up to logarithmic factors), the first result of this kind for any distributional RL algorithm. Our analysis also provides new theoretical perspectives on categorical approaches to distributional RL, and introduces a new distributional Bellman equation, the stochastic categorical CDF Bellman equation, which we expect to be of independent interest. Finally, we provide an experimental study comparing a variety of model-based distributional RL algorithms, with several key takeaways for practitioners.
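
For readers unfamiliar with the categorical approaches the abstract refers to, the sketch below shows a standard categorical distributional Bellman backup for a fixed policy on a known tabular Markov chain: each successor return distribution is shifted and scaled by the reward and discount, then projected back onto a fixed grid of atoms. This is a minimal illustration of the general technique, not the paper's algorithm; the transition matrix, rewards, atom grid, and iteration count below are hypothetical placeholders.

```python
# Minimal sketch of categorical distributional value iteration for a fixed
# policy on a known tabular Markov chain. All model quantities here
# (P, R, atom grid, discount) are hypothetical placeholders.
import numpy as np

def categorical_projection(z, probs, atoms):
    """Project a discrete distribution (support z, masses probs) onto the
    fixed grid `atoms` by splitting each mass linearly between the two
    nearest atoms (the standard categorical projection)."""
    v_min, v_max = atoms[0], atoms[-1]
    delta = atoms[1] - atoms[0]
    projected = np.zeros_like(atoms)
    b = (np.clip(z, v_min, v_max) - v_min) / delta  # fractional atom index
    lower = np.floor(b).astype(int)
    upper = np.minimum(lower + 1, len(atoms) - 1)
    same = lower == upper  # mass sitting exactly on the top atom
    np.add.at(projected, lower, np.where(same, probs, probs * (upper - b)))
    np.add.at(projected, upper, np.where(same, 0.0, probs * (b - lower)))
    return projected

def distributional_backup(eta, P, R, gamma, atoms):
    """One application of the projected distributional Bellman operator.
    eta[x] holds the categorical return-distribution masses at state x."""
    new_eta = np.zeros_like(eta)
    for x in range(P.shape[0]):
        shifted = R[x] + gamma * atoms  # support of r(x) + gamma * Z(y)
        for y in range(P.shape[1]):
            if P[x, y] > 0.0:
                new_eta[x] += P[x, y] * categorical_projection(
                    shifted, eta[y], atoms)
    return new_eta

# Hypothetical two-state chain: iterate the backup toward its fixed point.
atoms = np.linspace(0.0, 10.0, 51)
P = np.array([[0.9, 0.1], [0.2, 0.8]])            # transition probabilities
R = np.array([0.0, 1.0])                          # per-state rewards
eta = np.full((2, len(atoms)), 1.0 / len(atoms))  # uniform initial guess
for _ in range(200):
    eta = distributional_backup(eta, P, R, 0.9, atoms)
print("mean return from state 0:", float(eta[0] @ atoms))
```

Iterating this projected backup converges to a categorical fixed point; the paper's contribution concerns how accurately such fixed points can be estimated from samples in the generative model regime, via its stochastic categorical CDF Bellman equation, the details of which are not reproduced here.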
