
Visual Reinforcement Learning with Imagined Goals
Ashvin Nair · Vitchyr Pong · Murtaza Dalal · Shikhar Bahl · Steven Lin · Sergey Levine

Wed Dec 05 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #141

For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals in a real-world physical system, and substantially outperforms prior techniques.
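The three uses of the learned latent space described above can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: `encode` stands in for a trained VAE encoder (here a fixed random projection), imagined goals are drawn from the standard normal prior, the reward is the negative latent-space distance, and `relabel` implements a hindsight-style retroactive goal relabeling over a trajectory. All function names and the latent dimension are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 4  # hypothetical latent size for illustration

def encode(image):
    # Stand-in for a trained VAE encoder: map an image to the mean of its
    # latent posterior. Here a fixed linear projection, for illustration only.
    flat = image.reshape(-1)
    W = np.linspace(-1.0, 1.0, flat.size * LATENT_DIM).reshape(LATENT_DIM, flat.size)
    return W @ flat

def sample_imagined_goal():
    # Self-supervised practice: the agent "imagines" a goal by sampling a
    # latent z_g from the VAE prior N(0, I).
    return rng.standard_normal(LATENT_DIM)

def latent_reward(z, z_goal):
    # Goal-reaching reward: negative Euclidean distance in latent space,
    # so the reward is maximal (zero) when the goal latent is reached.
    return -float(np.linalg.norm(z - z_goal))

def relabel(trajectory):
    # Retroactive goal relabeling: replace each transition's goal with the
    # latent of a state actually reached later in the same trajectory, and
    # recompute the reward, turning failures into successful goal-reaching data.
    relabeled = []
    for t, (z, z_next, _old_goal) in enumerate(trajectory):
        future = trajectory[rng.integers(t, len(trajectory))][1]
        relabeled.append((z, z_next, future, latent_reward(z_next, future)))
    return relabeled
```

In this sketch the replay buffer stores transitions as `(z, z_next, z_goal)` latent triples; an off-policy learner would train on both the original and relabeled transitions.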

Author Information

Ashvin Nair (UC Berkeley)
Vitchyr Pong (UC Berkeley)
Murtaza Dalal (UC Berkeley)
Shikhar Bahl (UC Berkeley)
Steven Lin (UC Berkeley)
Sergey Levine (UC Berkeley)
