Democratizing RL Research by Reusing Prior Computation
Rishabh Agarwal

Sat Dec 03 09:15 AM -- 10:15 AM (PST) @
Event URL: https://openreview.net/forum?id=97m0ZmSl3z9

Learning tabula rasa, that is, without any prior knowledge, is the prevalent workflow in reinforcement learning (RL) research. Unfortunately, the inefficiency of deep RL typically excludes researchers without access to industrial-scale resources from tackling computationally demanding problems. Furthermore, as RL research moves toward more complex benchmarks, the computational barrier to entry will only increase. To address these issues, we present reincarnating RL (RRL) as an alternative workflow, or class of problem settings, in which prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another. RRL can democratize research by allowing the broader community to tackle complex RL problems without requiring excessive computational resources. To demonstrate this, we present a case study on Atari games showing how superhuman Atari agents can be trained in only a few hours, as opposed to a few days, on a single GPU. Finally, we address reproducibility and generalizability concerns in this research workflow. Overall, this work argues for an alternative approach to RL research, which we believe could significantly improve real-world RL adoption and help democratize RL research further.
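To make the workflow concrete, here is a minimal sketch (not the paper's exact method) of one simple instance of reincarnating RL: distilling a prior "teacher" policy into a new "student" agent before RL fine-tuning, rather than starting tabula rasa. The tabular setup and all names here are illustrative assumptions.

```python
# Illustrative sketch: warm-start a new agent from a prior policy via
# policy distillation. The tabular problem and hyperparameters are
# assumptions for demonstration, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 4

# Pretend these logits came from an earlier, expensive training run.
teacher_logits = rng.normal(size=(n_states, n_actions))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

teacher_probs = softmax(teacher_logits)

# The new agent starts from scratch...
student_logits = np.zeros((n_states, n_actions))

# ...but is first distilled toward the teacher by gradient descent on
# the cross-entropy loss (whose gradient w.r.t. logits is p - q),
# instead of learning tabula rasa.
lr = 0.5
for _ in range(500):
    grad = softmax(student_logits) - teacher_probs
    student_logits -= lr * grad

# After distillation the student reproduces the teacher's greedy
# actions; standard RL fine-tuning can continue from this warm start.
agreement = np.mean(
    softmax(student_logits).argmax(axis=1) == teacher_probs.argmax(axis=1)
)
print(round(agreement, 2))
```

In practice the same idea applies to neural policies and value functions (e.g., distilling a teacher's outputs into a student network before resuming RL), which is what makes iterating on agent designs cheap enough for small compute budgets.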

Author Information

Rishabh Agarwal (Google Research, Brain Team)

My research mainly revolves around deep reinforcement learning (RL), often with the goal of making RL methods suitable for real-world problems. This line of work includes an outstanding paper award at NeurIPS.