Contrastive Value Learning: Implicit Models for Simple Offline RL
Bogdan Mazoure · Benjamin Eysenbach · Ofir Nachum · Jonathan Tompson
Event URL: https://openreview.net/forum?id=-VxdxQULxOh

Model-based reinforcement learning (RL) methods are appealing in the offline setting because they allow an agent to reason about the consequences of actions without interacting with the environment. Prior methods learn a 1-step dynamics model, which predicts the next state given the current state and action. These models do not immediately tell the agent which actions to take, but must be integrated into a larger RL framework. Can we model the environment dynamics in a different way, such that the learned model does directly indicate the value of each action? In this paper, we propose Contrastive Value Learning (CVL), which learns an implicit, multi-step model of the environment dynamics. This model can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action, without requiring any TD learning. Because this model represents the multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior offline RL methods on complex continuous control benchmarks.
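To make the idea of an implicit, contrastively learned model concrete, the sketch below shows one common way such a model is trained: an InfoNCE-style classifier over (state, action, future-state) triples, where in-batch futures from other trajectories serve as negatives. This is an illustrative assumption about the general technique the abstract describes, not the authors' exact CVL objective; all names, architecture sizes, and the choice of PyTorch are hypothetical.

# Sketch (assumed, not the official CVL code): an InfoNCE-style contrastive
# critic over (state, action, future_state) triples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveCritic(nn.Module):
    """Scores how likely a future state is to be reached from (state, action)."""
    def __init__(self, state_dim, action_dim, embed_dim=64):
        super().__init__()
        self.sa_encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim))
        self.future_encoder = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim))

    def forward(self, state, action, future_state):
        # The inner product of the two embeddings acts as an unnormalized
        # log-ratio between the discounted future-state distribution under
        # (state, action) and a marginal over future states.
        sa = self.sa_encoder(torch.cat([state, action], dim=-1))
        fut = self.future_encoder(future_state)
        return sa @ fut.t()  # [batch, batch] logits

def infonce_loss(critic, state, action, future_state):
    # Positive pairs lie on the diagonal; each row's other futures act as
    # negatives sampled from the rest of the batch.
    logits = critic(state, action, future_state)
    labels = torch.arange(logits.shape[0])
    return F.cross_entropy(logits, labels)

The sketch only covers the contrastive modelling step; how the resulting classifier is turned into action-value estimates without TD learning is the paper's contribution and is not reproduced here.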

Author Information

Bogdan Mazoure (McGill University, Google Brain)

Ph.D. student at MILA / McGill University, supervised by Doina Precup and Devon Hjelm. Interested in reinforcement learning, representation learning, mathematical statistics and density estimation.

Benjamin Eysenbach (CMU)

Assistant professor at Princeton working on self-supervised reinforcement learning (scaling, algorithms, theory, and applications).

Ofir Nachum (Google Brain)
Jonathan Tompson (Google Brain)
