Poster in Workshop: Deep Reinforcement Learning Workshop

Value-based CTDE Methods in Symmetric Two-team Markov Game: from Cooperation to Team Competition

Pascal Leroy · Jonathan Pisane · Damien Ernst


Abstract:

In this paper, we identify the best training scenario for training a team of agents to compete against multiple possible strategies of opposing teams. We restrict ourselves to the case of a symmetric two-team Markov game, a competition between two symmetric teams. We evaluate cooperative value-based methods in this mixed cooperative-competitive setting. We selected three training methods based on the centralised training and decentralised execution (CTDE) paradigm: QMIX, MAVEN and QVMix. To train such teams, we modified the StarCraft Multi-Agent Challenge environment to create competitive scenarios in which both teams learn and compete simultaneously in a partially observable environment. For each method, we considered three learning scenarios that differ in the variety of team policies encountered during training. Our results suggest that training against multiple evolving strategies achieves the best results when the trained teams are evaluated against several strategies, regardless of whether the stationary strategy is better than all trained teams.
