

Contributed Talk in Workshop: Generalization in Planning (GenPlan '23)

Explore to Generalize in Zero-Shot RL

Ev Zisselman · Itai Lavie · Daniel Soudry · Aviv Tamar

Keywords: [ generalization ] [ Reinforcement Learning ] [ State Space Maximum Entropy Exploration ]

Sat 16 Dec 7:15 a.m. PST — 7:25 a.m. PST

Abstract:

We study zero-shot generalization in reinforcement learning: optimizing a policy on a set of training tasks so that it performs well on a similar but unseen test task. To mitigate overfitting, previous work explored different notions of invariance to the task. However, on problems such as the ProcGen Maze, an adequate solution that is invariant to the task visualization does not exist, and therefore invariance-based approaches fail. Our insight is that a policy that explores the domain effectively is harder to memorize than a policy that maximizes reward for a specific task, so we expect such learned exploratory behavior to generalize well; we demonstrate this empirically on several domains that are difficult for invariance-based approaches. Our Explore to Generalize algorithm (ExpGen) builds on this insight: we train an additional ensemble of agents that optimize reward. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions, which are guaranteed to generalize and drive us to a novel part of the state space, where the ensemble may agree again. Our approach achieves state-of-the-art results on several tasks in the ProcGen challenge that have so far eluded effective generalization; for example, we demonstrate a success rate of 82% on the Maze task and 74% on Heist with 200 training levels.
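The test-time behavior described above can be pictured as a simple decision rule: act on the reward-ensemble's consensus when it exists, and otherwise defer to the exploration policy. The sketch below is a minimal, hypothetical illustration of that rule, not the authors' implementation; the names `ensemble_policies`, `exploration_policy`, and `agreement_threshold` are assumptions introduced for clarity.

```python
# Hypothetical sketch of the ExpGen test-time decision rule (not the authors' code):
# follow the reward ensemble when its members agree, otherwise take an exploratory
# action to reach a novel state where the ensemble may agree again.
import numpy as np

def expgen_act(obs, ensemble_policies, exploration_policy, agreement_threshold=0.8):
    """Choose one action for a single test-time step.

    ensemble_policies: list of callables, each mapping obs -> discrete action
        (the reward-maximizing ensemble).
    exploration_policy: callable mapping obs -> action (e.g. a state-space
        maximum-entropy exploration policy trained on the training levels).
    agreement_threshold: fraction of ensemble members that must pick the same
        action before the consensus action is trusted (assumed hyperparameter).
    """
    actions = np.array([pi(obs) for pi in ensemble_policies])
    majority_action = np.bincount(actions).argmax()          # most-voted action
    agreement = (actions == majority_action).mean()          # fraction agreeing

    if agreement >= agreement_threshold:
        # Ensemble agrees: exploit the consensus, reward-seeking action.
        return int(majority_action)
    # Ensemble disagrees: fall back to the exploration policy, whose behavior
    # is expected to generalize and drive the agent to new parts of the state space.
    return exploration_policy(obs)
```

In this reading, the exploration policy acts as a generalizing fallback whenever the reward ensemble's disagreement signals that the agent is outside the regime it memorized during training.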
