

Contributed Talk in Workshop: Generalization in Planning (GenPlan '23)

Reinforcement Learning with Augmentation Invariant Representation: A Non-contrastive Approach

Nasik Muhammad Nafi · William Hsu

Keywords: [ non-contrastive ] [ generalization ] [ representation learning ]

Sat 16 Dec 7:05 a.m. PST — 7:15 a.m. PST

Abstract:

Data augmentation has proven to be an effective means of improving generalization performance in reinforcement learning (RL). However, recent approaches use the augmented data directly to learn the value estimate or to regularize that estimation, often ignoring the key point: the model needs to learn that augmented data represent the same underlying state. In this work, we present RAIR: Reinforcement learning with Augmentation Invariant Representation, which disentangles the representation learning task from the RL task and aims to learn similar latent representations for an original observation and its augmented counterpart. Our approach learns representations of high-dimensional visual observations in a non-contrastive self-supervised way, combined with the standard RL objective. In particular, RAIR gradually pushes the latent representation of an observation closer to the representations produced for the corresponding augmented observations. As a result, our agent is more robust to changes in the environment. We evaluate RAIR on all sixteen environments of the RL generalization benchmark Procgen. The experimental results indicate that RAIR outperforms other data-augmentation-based approaches under the standard generalization evaluation protocol.
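For readers who want the mechanics, below is a minimal PyTorch sketch of the kind of non-contrastive auxiliary objective the abstract describes: an online encoder plus predictor is trained to match a slowly updated (EMA) target encoder's representation of the augmented observation, with a stop-gradient on the target so no negative pairs are needed. The network shapes, the predictor head, the EMA rate `tau`, the loss weight `aux_coef`, and the placeholder augmentation are illustrative assumptions, not details taken from the paper.

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps a 64x64 RGB observation (Procgen resolution) to a latent vector.
    The architecture here is an assumed, generic CNN, not the paper's."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, latent_dim),  # 4x4 feature map for 64x64 input
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def invariance_loss(online: nn.Module, predictor: nn.Module,
                    target: nn.Module, obs: torch.Tensor,
                    aug_obs: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between the predicted online latent of the
    original observation and the stop-gradient target latent of the augmented
    one: a BYOL-style non-contrastive objective with no negative pairs."""
    z_pred = predictor(online(obs))
    with torch.no_grad():                      # stop gradient through target
        z_target = target(aug_obs)
    return -F.cosine_similarity(z_pred, z_target, dim=-1).mean()


@torch.no_grad()
def ema_update(online: nn.Module, target: nn.Module, tau: float = 0.99) -> None:
    """Slowly track the online encoder, so the representation of an
    observation is pushed toward its augmented version gradually."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(tau).add_((1.0 - tau) * p_o)


# Hypothetical training step: add the auxiliary loss to the usual RL loss.
encoder = Encoder()
predictor = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
target_encoder = copy.deepcopy(encoder)        # EMA copy, never backpropagated

obs = torch.rand(8, 3, 64, 64)                 # batch of observations
aug_obs = torch.roll(obs, shifts=4, dims=-1)   # stand-in for a real augmentation
aux = invariance_loss(encoder, predictor, target_encoder, obs, aug_obs)
# total_loss = rl_loss + aux_coef * aux        # aux_coef is an assumed weight
ema_update(encoder, target_encoder)
```

The stop-gradient on the target branch is what lets a non-contrastive objective like this avoid representational collapse without negative samples, which is the main practical difference from contrastive auxiliary losses used in earlier augmentation-based RL methods.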
