
Settling the Variance of Multi-Agent Policy Gradients
Jakub Grudzien Kuba · Muning Wen · Linghui Meng · Shangding Gu · Haifeng Zhang · David Mguni · Jun Wang · Yaodong Yang

Thu Dec 09 08:30 AM -- 10:00 AM (PST) @ Virtual

Policy gradient (PG) methods are popular reinforcement learning (RL) methods in which a baseline is often applied to reduce the variance of gradient estimates. In multi-agent RL (MARL), although the PG theorem can be naturally extended, the effectiveness of multi-agent PG (MAPG) methods degrades as the variance of gradient estimates increases rapidly with the number of agents. In this paper, we offer a rigorous analysis of MAPG methods: we first quantify the contributions of the number of agents and of the agents' explorations to the variance of MAPG estimators. Based on this analysis, we derive the optimal baseline (OB) that achieves the minimal variance. Against the OB, we measure the excess variance of existing MARL algorithms such as vanilla MAPG and COMA. For settings with deep neural networks, we also propose a surrogate version of OB, which can be seamlessly plugged into any existing PG method in MARL. On the Multi-Agent MuJoCo and StarCraft benchmarks, our OB technique effectively stabilises training and improves the performance of multi-agent PPO and COMA algorithms by a significant margin. Code is released at \url{https://github.com/morning9393/Optimal-Baseline-for-Multi-agent-Policy-Gradients}.
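The variance-minimizing baseline idea behind OB can be illustrated on a toy single-agent case. The sketch below (a hypothetical NumPy example, not the paper's released code) uses the standard result that, for a score-function gradient estimator, the baseline minimizing the trace of the covariance weights Q-values by squared score norms, b* = E[||∇log π(a)||² Q(a)] / E[||∇log π(a)||²], rather than the plain value function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy softmax policy over 4 actions (hypothetical values).
logits = rng.normal(size=4)
pi = np.exp(logits - logits.max())
pi /= pi.sum()

# Hypothetical Q-values for each action.
q = np.array([1.0, 3.0, -2.0, 0.5])

# Score function of a softmax policy w.r.t. its logits:
# grad_logits log pi(a) = onehot(a) - pi.  Row a holds the score for action a.
scores = np.eye(4) - pi
sq_norms = (scores ** 2).sum(axis=1)

# Variance-minimizing baseline: Q-values weighted by squared score norms.
b_opt = (pi * sq_norms * q).sum() / (pi * sq_norms).sum()

def grad_variance(baseline):
    """Trace of the covariance of g(a) = grad log pi(a) * (Q(a) - baseline)."""
    g = scores * (q - baseline)[:, None]
    mean_g = (pi[:, None] * g).sum(axis=0)       # baseline-independent (E[score] = 0)
    second_moment = (pi[:, None] * g ** 2).sum(axis=0)
    return (second_moment - mean_g ** 2).sum()

v = (pi * q).sum()  # ordinary value-function baseline, for comparison
print(grad_variance(0.0), grad_variance(v), grad_variance(b_opt))
```

Because the expected score is zero, subtracting any baseline leaves the gradient unbiased; the choice only affects variance, and b_opt attains the minimum among scalar baselines. The paper's multi-agent OB extends this per agent, where the variance growth with the number of agents makes the choice of baseline far more consequential.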

Author Information

Jakub Grudzien Kuba (Huawei Technologies Ltd.)

AI Researcher at Huawei Technologies. MSc student at the University of Oxford. BSc in Mathematics with Mathematical Computation from Imperial College London.

Muning Wen (Shanghai Jiao Tong University)
Linghui Meng (Institute of Automation, Chinese Academy of Sciences)
Shangding Gu (Technical University of Munich)
Haifeng Zhang (Institute of Automation, Chinese Academy of Sciences)
David Mguni (PROWLER.io)

Senior Machine Learning Researcher at PROWLER.io since March 2017.

Jun Wang (University College London)
Yaodong Yang (University College London)
