
Autonomous Reinforcement Learning via Subgoal Curricula
Archit Sharma · Abhishek Gupta · Sergey Levine · Karol Hausman · Chelsea Finn

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Reinforcement learning (RL) promises to enable the autonomous acquisition of complex behaviors for diverse agents. However, the success of current reinforcement learning algorithms is predicated on an often under-emphasized requirement -- each trial needs to start from a fixed initial state distribution. Unfortunately, resetting the environment to its initial state after each trial requires a substantial amount of human supervision and extensive instrumentation of the environment, which defeats the goal of autonomous acquisition of complex behaviors. In this work, we propose Value-accelerated Persistent Reinforcement Learning (VaPRL), which generates a curriculum of initial states such that the agent can bootstrap on the success of easier tasks to efficiently learn harder tasks. The agent also learns to reach the initial states proposed by the curriculum, minimizing the reliance on human interventions during learning. We observe that VaPRL reduces the interventions required by three orders of magnitude compared to episodic RL, while outperforming prior state-of-the-art methods for reset-free RL in terms of both sample efficiency and asymptotic performance on a variety of simulated robotics problems.
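The curriculum idea in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes a learned value function `value_fn` (estimating success likelihood from a state) and a `distance_to_goal` measure, and picks the candidate initial state closest to the task goal that the agent can still solve reliably, so training bootstraps from easier subgoals toward the full task. All names and the threshold `epsilon` are assumptions for illustration.

```python
def select_subgoal(candidates, value_fn, distance_to_goal, epsilon=0.5):
    """Pick the candidate initial state closest to the task goal whose
    estimated value still exceeds epsilon, i.e. the hardest task the
    agent can currently solve (illustrative sketch, not VaPRL itself)."""
    # Keep only states the current policy is likely to succeed from.
    feasible = [s for s in candidates if value_fn(s) >= epsilon]
    if not feasible:
        # Nothing is easy enough yet: fall back to the easiest candidate.
        return max(candidates, key=value_fn)
    # Among solvable states, prefer the one nearest the real task goal.
    return min(feasible, key=distance_to_goal)
```

As the policy improves, `value_fn` rises for harder states, so the selected initial states move toward the true task's initial state distribution over the course of training.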

Author Information

Archit Sharma (Stanford University)
Abhishek Gupta (UC Berkeley)
Sergey Levine (UC Berkeley)
Karol Hausman (Google Brain)
Chelsea Finn (Stanford University)
