We present a new model-based algorithm for reinforcement learning (RL) which consists of explicit exploration and exploitation phases, and is applicable in large or infinite state spaces. The algorithm maintains a set of dynamics models consistent with current experience and explores by finding policies which induce high disagreement between their state predictions. It then exploits using the refined set of models or experience gathered during exploration. We show that under realizability and optimal planning assumptions, our algorithm provably finds a near-optimal policy with a number of samples that is polynomial in a structural complexity measure which we show to be low in several natural settings. We then give a practical approximation using neural networks and demonstrate its performance and sample efficiency in practice.
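The exploration signal the abstract describes can be illustrated with a small sketch: maintain an ensemble of learned dynamics models fit to the experience gathered so far, and reward the exploration policy in proportion to the ensemble's disagreement about its next-state predictions. This is a minimal, hedged illustration rather than the paper's exact algorithm; the network sizes, the variance-based disagreement measure, and the training loop below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """One ensemble member: predicts the next state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def disagreement_bonus(models, state, action):
    """Exploration bonus: variance of the ensemble's next-state predictions."""
    preds = torch.stack([m(state, action) for m in models])  # (K, B, state_dim)
    return preds.var(dim=0).mean(dim=-1)                     # (B,)


def fit_ensemble(models, optimizers, states, actions, next_states, epochs=10):
    """Keep each model consistent with the experience collected so far."""
    for model, opt in zip(models, optimizers):
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(states, actions), next_states)
            loss.backward()
            opt.step()


if __name__ == "__main__":
    state_dim, action_dim, K = 4, 2, 5
    models = [DynamicsModel(state_dim, action_dim) for _ in range(K)]
    opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in models]

    # Placeholder replay data standing in for the agent's collected experience.
    s = torch.randn(256, state_dim)
    a = torch.randn(256, action_dim)
    s_next = torch.randn(256, state_dim)

    fit_ensemble(models, opts, s, a, s_next)
    bonus = disagreement_bonus(models, s, a)
    print("mean disagreement bonus:", bonus.mean().item())
```

In this sketch, a planner or policy-optimization step would seek actions with a high disagreement bonus during the exploration phase; once the ensemble agrees (the bonus is low), the refined models or the gathered experience can be used for exploitation.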
Author Information
Mikael Henaff (Microsoft Research)
More from the Same Authors
- 2021: Imitation Learning from Pixel Observations for Continuous Control
  Samuel Cohen · Brandon Amos · Marc Deisenroth · Mikael Henaff · Eugene Vinitsky · Denis Yarats
- 2022: Integrating Episodic and Global Bonuses for Efficient Exploration
  Mikael Henaff · Minqi Jiang · Roberta Raileanu
- 2022 Poster: Exploration via Elliptical Episodic Bonuses
  Mikael Henaff · Roberta Raileanu · Minqi Jiang · Tim Rocktäschel