MAVEN: Multi-Agent Variational Exploration
Anuj Mahajan · Tabish Rashid · Mikayel Samvelyan · Shimon Whiteson

Wed Dec 11th 05:00 -- 07:00 PM @ East Exhibition Hall B + C #199

Centralised training with decentralised execution is an important setting for cooperative deep multi-agent reinforcement learning due to communication constraints during execution and computational tractability in training. In this paper, we analyse value-based methods that are known to have superior performance in complex environments. We specifically focus on QMIX, the current state-of-the-art in this domain. We show that the representation constraints on the joint action-values introduced by QMIX and similar methods lead to provably poor exploration and suboptimality. Furthermore, we propose a novel approach called MAVEN that hybridises value and policy-based methods by introducing a latent space for hierarchical control. The value-based agents condition their behaviour on the shared latent variable controlled by a hierarchical policy. This allows MAVEN to achieve committed, temporally extended exploration, which is key to solving complex multi-agent tasks. Our experimental results show that MAVEN achieves significant performance improvements on the challenging StarCraft Multi-Agent Challenge (SMAC) domain.
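The core mechanism described above, value-based agents conditioning on a shared latent variable that is sampled once per episode and held fixed, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network shapes, the random linear Q-functions, and the fixed categorical latent distribution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_ACTIONS, OBS_DIM, Z_DIM = 2, 4, 8, 3  # illustrative sizes

# Hierarchical policy over the shared latent z. Here it is a fixed
# uniform categorical; in MAVEN this distribution is itself learned.
z_probs = np.full(Z_DIM, 1.0 / Z_DIM)

# Hypothetical per-agent Q-function: a random linear map from the
# concatenated (observation, one-hot latent) to per-action values.
W = rng.normal(size=(N_AGENTS, OBS_DIM + Z_DIM, N_ACTIONS))

def agent_q(agent, obs, z):
    """Action values for one agent, conditioned on the shared latent z."""
    z_onehot = np.eye(Z_DIM)[z]
    return np.concatenate([obs, z_onehot]) @ W[agent]

# Sample z once at the start of the episode and hold it fixed, so the
# agents' greedy behaviour stays committed to one joint exploration mode
# for the whole episode (temporally extended exploration).
z = int(rng.choice(Z_DIM, p=z_probs))
for t in range(3):  # a few illustrative timesteps
    obs = rng.normal(size=(N_AGENTS, OBS_DIM))
    actions = [int(np.argmax(agent_q(i, obs[i], z))) for i in range(N_AGENTS)]
    print(f"t={t} z={z} actions={actions}")
```

Because z is resampled only between episodes, different draws of z induce distinct joint behaviours, which is the source of the committed exploration the abstract refers to; the decentralised Q-agents and the mixing/hierarchical components are trained jointly in the full method.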

Author Information

Anuj Mahajan (University of Oxford)

Anuj is a PhD student in machine learning at the University of Oxford. His research focuses on using deep learning, probabilistic inference and optimisation methods for single- and multi-agent reinforcement learning. Anuj completed his undergraduate degree in Computer Science at the Indian Institute of Technology, Delhi. His PhD is funded by the Google DeepMind Scholarship and the Drapers Scholarship.

Tabish Rashid (University of Oxford)
Mikayel Samvelyan (Russian-Armenian University)
Shimon Whiteson (University of Oxford)