Exploration in environments with high-dimensional observations is hard. One promising approach is to use intrinsic rewards, which often boils down to estimating the "novelty" of states, transitions, or trajectories with deep networks. Prior work has shown that conditional prediction objectives such as masked autoencoding can be seen as stochastic estimation of pseudo-likelihood. We show how this perspective naturally leads to a unified view of existing intrinsic reward approaches: they are special cases of conditional prediction, where the estimation of novelty can be seen as pseudo-likelihood estimation under different mask distributions. From this view, we propose Masked Input Modeling for Exploration (MIMEx), a general framework for deriving intrinsic rewards in which the mask distribution can be flexibly tuned to control the difficulty of the underlying conditional prediction task. We demonstrate that MIMEx achieves superior results compared to competitive baselines on a suite of challenging sparse-reward visuomotor tasks.
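The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of the idea the abstract describes: mask part of a short trajectory of encoded observations, predict the masked entries, and use the reconstruction error as an exploration bonus. All module sizes, the encoder, the mask ratio, and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MaskedPredictionBonus(nn.Module):
    """Hypothetical masked-prediction intrinsic reward (illustrative only)."""

    def __init__(self, obs_dim: int, embed_dim: int = 64, mask_ratio: float = 0.5):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, embed_dim)          # stand-in for a visual encoder
        self.mask_token = nn.Parameter(torch.zeros(embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.predictor = nn.TransformerEncoder(layer, num_layers=2)
        self.decoder = nn.Linear(embed_dim, obs_dim)
        self.mask_ratio = mask_ratio

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        """obs_seq: (batch, seq_len, obs_dim) -> per-sequence intrinsic reward."""
        tokens = self.encoder(obs_seq)
        # Sample a random mask; the mask distribution is the knob that controls
        # the difficulty of the underlying conditional prediction task.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
        masked = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        recon = self.decoder(self.predictor(masked))
        # Reconstruction error on masked positions only, averaged per sequence;
        # higher error on rarely seen trajectories yields a larger bonus.
        err = ((recon - obs_seq) ** 2).mean(dim=-1)
        return (err * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)


# Usage sketch: a batch of 4 trajectories of length 8 with 32-dim observations.
bonus_fn = MaskedPredictionBonus(obs_dim=32)
reward = bonus_fn(torch.randn(4, 8, 32))   # shape: (4,)
```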
Author Information
Toru Lin (University of California, Berkeley)
Allan Jabri (UC Berkeley)
More from the Same Authors
- 2023 Poster: Diffusion Self-Guidance for Controllable Image Generation
  Dave Epstein · Allan Jabri · Ben Poole · Alexei Efros · Aleksander Holynski
- 2020 Poster: Space-Time Correspondence as a Contrastive Random Walk
  Allan Jabri · Andrew Owens · Alexei Efros
- 2020 Oral: Space-Time Correspondence as a Contrastive Random Walk
  Allan Jabri · Andrew Owens · Alexei Efros
- 2019 Poster: Unsupervised Curricula for Visual Meta-Reinforcement Learning
  Allan Jabri · Kyle Hsu · Abhishek Gupta · Benjamin Eysenbach · Sergey Levine · Chelsea Finn
- 2019 Spotlight: Unsupervised Curricula for Visual Meta-Reinforcement Learning
  Allan Jabri · Kyle Hsu · Abhishek Gupta · Benjamin Eysenbach · Sergey Levine · Chelsea Finn