

Poster

On the Importance of Exploration for Generalization in Reinforcement Learning

Yiding Jiang · J. Zico Kolter · Roberta Raileanu

Great Hall & Hall B1+B2 (level 1) #1912
Thu 14 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

Existing approaches for improving generalization in deep reinforcement learning (RL) have mostly focused on representation learning, neglecting RL-specific aspects such as exploration. We hypothesize that the agent's exploration strategy plays a key role in its ability to generalize to new environments. Through a series of experiments in a tabular contextual MDP, we show that exploration is helpful not only for efficiently finding the optimal policy for the training environments but also for acquiring knowledge that helps decision making in unseen environments. Based on these observations, we propose EDE: Exploration via Distributional Ensemble, a method that encourages the exploration of states with high epistemic uncertainty through an ensemble of Q-value distributions. The proposed algorithm is the first value-based approach to achieve strong performance on both Procgen and Crafter, two benchmarks for generalization in RL with high-dimensional observations. The open-sourced implementation can be found at https://github.com/facebookresearch/ede.
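To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation, which is at the GitHub link above) of exploration driven by an ensemble of Q-value distributions: each ensemble member predicts a quantile distribution over returns, disagreement across members' mean Q-values serves as a proxy for epistemic uncertainty, and actions are chosen optimistically with respect to that uncertainty. The names `DistributionalEnsemble`, `ucb_action`, and the bonus weight `beta` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DistributionalEnsemble(nn.Module):
    """Illustrative ensemble of quantile Q-value heads (a sketch, not the official EDE code)."""

    def __init__(self, obs_dim, n_actions, n_quantiles=8, n_members=5):
        super().__init__()
        self.n_actions = n_actions
        self.n_quantiles = n_quantiles
        self.members = nn.ModuleList([
            nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, n_actions * n_quantiles),
            )
            for _ in range(n_members)
        ])

    def forward(self, obs):
        # Each member predicts a quantile distribution over returns per action.
        # Output shape: (n_members, batch, n_actions, n_quantiles).
        return torch.stack([
            m(obs).view(-1, self.n_actions, self.n_quantiles)
            for m in self.members
        ])

def ucb_action(quantiles, beta=1.0):
    # Averaging over quantiles gives each member's Q-value point estimate.
    q_per_member = quantiles.mean(dim=-1)   # (n_members, batch, n_actions)
    q_mean = q_per_member.mean(dim=0)       # ensemble mean
    q_std = q_per_member.std(dim=0)         # ensemble disagreement ~ epistemic uncertainty
    # Prefer actions the ensemble is both optimistic and uncertain about.
    return (q_mean + beta * q_std).argmax(dim=-1)

# Usage: pick exploratory actions for a batch of observations.
ensemble = DistributionalEnsemble(obs_dim=16, n_actions=4)
obs = torch.randn(32, 16)
actions = ucb_action(ensemble(obs), beta=1.0)  # shape (32,)
```

Using ensemble disagreement rather than the return distribution itself is the key design choice: the quantile spread within one member mixes in aleatoric (environment) noise, whereas disagreement across members isolates what the agent has not yet learned, which is the signal the abstract argues drives generalization-friendly exploration.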
