GPU-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning
Xiao-Yang Liu · Zhuoran Yang · Zhaoran Wang · Anwar Walid · Jian Guo · Michael Jordan
Event URL: https://openreview.net/forum?id=DEgq2LODPo
Deep reinforcement learning (DRL) has revolutionized learning and actuation in applications such as game playing and robotic control. The cost of data collection, i.e., generating transitions from agent-environment interactions, remains a major challenge for wider DRL adoption in complex real-world problems. Following a cloud-native paradigm to train DRL agents on a GPU cloud platform is a promising solution. In this paper, we present a scalable and elastic library, \textit{GPU-podracer}, for cloud-native deep reinforcement learning, which efficiently utilizes millions of GPU cores to carry out massively parallel agent-environment interactions. At a high level, GPU-podracer employs a tournament-based ensemble scheme to orchestrate the training process on hundreds or even thousands of GPUs, scheduling the interactions between a leaderboard and a training pool with hundreds of pods. At a low level, each pod simulates agent-environment interactions in parallel by fully utilizing the nearly $7,000$ CUDA cores of a single GPU. Our GPU-podracer library features high scalability, elasticity, and accessibility by following the development principles of containerization, microservices, and MLOps. Using an NVIDIA DGX SuperPOD cloud, we conduct extensive experiments on various tasks in locomotion and stock trading and show that GPU-podracer outperforms Stable Baselines3 and RLlib, e.g., GPU-podracer achieves nearly linear scaling.
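
The tournament-based scheme described in the abstract can be pictured as a leaderboard that retains the best-performing agents and a pool of pods that each draw an agent, train and evaluate it, and submit the result back. Below is a minimal, self-contained sketch of that orchestration loop; the names (`Leaderboard`, `Entry`, `train_and_evaluate`) are hypothetical illustrations rather than GPU-podracer's actual API, and the single-process loop stands in for pods that would run concurrently on separate GPUs.

```python
# Illustrative sketch of a tournament-based ensemble scheduler.
# NOTE: all class/function names here are hypothetical, not GPU-podracer's API.
import heapq
import random
from dataclasses import dataclass, field


@dataclass(order=True)
class Entry:
    score: float
    agent: dict = field(compare=False)  # stand-in for network weights


class Leaderboard:
    """Keeps the top-k agents by evaluation score."""

    def __init__(self, capacity: int = 10):
        self.capacity = capacity
        self.entries: list[Entry] = []

    def submit(self, agent: dict, score: float) -> None:
        heapq.heappush(self.entries, Entry(score, agent))
        if len(self.entries) > self.capacity:
            heapq.heappop(self.entries)  # evict the worst-performing agent

    def sample(self) -> dict:
        # A pod restarts from a randomly drawn high-performing agent.
        return random.choice(self.entries).agent if self.entries else {"step": 0}


def train_and_evaluate(agent: dict) -> tuple[dict, float]:
    """Placeholder for one pod's work: parallel rollouts plus gradient
    updates on a single GPU, returning the updated agent and its score."""
    new_agent = {"step": agent.get("step", 0) + 1}
    score = random.random() + 0.01 * new_agent["step"]  # dummy evaluation
    return new_agent, score


if __name__ == "__main__":
    board = Leaderboard(capacity=4)
    for generation in range(20):      # orchestrator loop
        for pod in range(8):          # pods would run concurrently in practice
            agent = board.sample()
            updated, score = train_and_evaluate(agent)
            board.submit(updated, score)
    best = max(board.entries)
    print(f"best score {best.score:.3f} after {best.agent['step']} updates")
```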

Author Information

Xiao-Yang Liu (Columbia University)
Zhuoran Yang (Princeton University)
Zhaoran Wang (Princeton University)
Anwar Walid (Nokia Bell Labs)
Jian Guo
Michael Jordan (UC Berkeley)
