Deep reinforcement learning (RL) has shown great empirical success, but suffers from brittleness and sample inefficiency. A potential remedy is to use a previously trained policy as a source of supervision. In this work, we refer to these policies as teachers and study how to transfer their expertise to new student policies by focusing on data usage. We propose a framework, Data CUrriculum for Reinforcement learning (DCUR), which first trains teachers using online deep RL and stores the logged environment interaction history. Then, students learn by running either offline RL or by using teacher data in combination with a small amount of self-generated data. DCUR’s central idea is to define a class of data curricula which, as a function of training time, limit the student to sampling from a fixed subset of the full teacher data. We test teachers and students using state-of-the-art deep RL algorithms across a variety of data curricula. Results suggest that the choice of data curriculum significantly impacts student learning, and that it is beneficial to limit the data during early training stages while gradually letting the data availability grow over time. We identify when the student can learn offline and match teacher performance without relying on specialized offline RL algorithms. Furthermore, we show that collecting a small fraction of online data provides complementary benefits with the data curriculum. Supplementary material is available at https://sites.google.com/view/anon-dcur/.
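The curriculum idea described above, restricting the student to a subset of the teacher's logged data that grows with training time, can be sketched minimally as follows. The linear growth schedule, function names, and the `warmup_frac` parameter are illustrative assumptions for this sketch, not the paper's exact curricula.

```python
import random

def curriculum_limit(step, total_steps, buffer_size, warmup_frac=0.1):
    # Hypothetical linear schedule: start with a small prefix of the
    # teacher's logged data and let availability grow over training time.
    start = int(warmup_frac * buffer_size)
    grow = int((buffer_size - start) * min(step / total_steps, 1.0))
    return start + grow

def sample_batch(teacher_data, step, total_steps, batch_size=32, seed=None):
    # Sample uniformly from the currently available prefix of the
    # teacher's time-ordered interaction history.
    rng = random.Random(seed)
    limit = curriculum_limit(step, total_steps, len(teacher_data))
    return [teacher_data[rng.randrange(limit)] for _ in range(batch_size)]
```

Because the teacher's buffer is ordered by logging time, restricting sampling to a prefix roughly exposes the student to data from earlier (lower-quality) stages of the teacher's own learning first.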
Author Information
Daniel Seita (UC Berkeley)
Abhinav Gopal (UC Berkeley)
Mandi Zhao (University of California, Berkeley)
John Canny (UC Berkeley)
More from the Same Authors
- 2022 Poster: On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning
  Mandi Zhao · Pieter Abbeel · Stephen James
- 2021 Poster: Compressive Visual Representations
  Kuang-Huei Lee · Anurag Arnab · Sergio Guadarrama · John Canny · Ian Fischer
- 2020 Poster: Predictive Information Accelerates Learning in RL
  Kuang-Huei Lee · Ian Fischer · Anthony Liu · Yijie Guo · Honglak Lee · John Canny · Sergio Guadarrama
- 2016 Invited Talk: Optimizing Machine Learning and Deep Learning (John Canny, UC Berkeley & Google Research)
  John Canny