Synchronizing decisions across multiple agents in realistic settings is problematic since it requires agents to wait for other agents to terminate and to communicate about termination reliably. Ideally, agents should learn and execute asynchronously instead. Such asynchronous methods also allow temporally extended actions that can take different amounts of time depending on the situation and the action executed. Unfortunately, current policy gradient methods are not applicable in asynchronous settings, as they assume that agents synchronously reason about action selection at every time step. To allow asynchronous learning and decision-making, we formulate a set of asynchronous multi-agent actor-critic methods that allow agents to directly optimize asynchronous policies in three standard training paradigms: decentralized learning, centralized learning, and centralized training for decentralized execution. Empirical results (in simulation and hardware) in a variety of realistic domains demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms for learning high-quality asynchronous solutions.
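To make the core idea concrete, the sketch below illustrates one generic way an asynchronous, macro-action-based actor-critic update can work: each agent only selects a new action and performs a gradient step when its own temporally extended action terminates, using the reward accumulated (and the number of primitive steps elapsed) while that action was executing. This is a minimal illustration under assumed interfaces, not the paper's specific algorithms; the `Agent` class, `select_macro_action`, `update`, and arguments such as `cum_reward` and `duration` are hypothetical names introduced only for this example.

```python
# Minimal sketch (assumption, not the authors' exact method) of a per-agent
# asynchronous actor-critic update with temporally extended (macro-) actions.
import torch
import torch.nn as nn


class Agent:
    def __init__(self, obs_dim, n_macro_actions, lr=1e-3, gamma=0.99):
        self.actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                   nn.Linear(64, n_macro_actions))
        self.critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                    nn.Linear(64, 1))
        self.opt = torch.optim.Adam(
            list(self.actor.parameters()) + list(self.critic.parameters()), lr=lr)
        self.gamma = gamma

    def select_macro_action(self, obs):
        # Called only when the agent's previous macro-action has terminated.
        logits = self.actor(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

    def update(self, obs, log_prob, cum_reward, duration, next_obs, done):
        # cum_reward: discounted reward accumulated over the macro-action's
        # execution; duration: number of primitive steps it took.
        obs = torch.as_tensor(obs, dtype=torch.float32)
        next_obs = torch.as_tensor(next_obs, dtype=torch.float32)
        with torch.no_grad():
            bootstrap = 0.0 if done else self.critic(next_obs).item()
        target = cum_reward + (self.gamma ** duration) * bootstrap
        value = self.critic(obs)
        advantage = target - value.item()
        actor_loss = -log_prob * advantage          # policy-gradient term
        critic_loss = (value - target) ** 2         # value regression term
        self.opt.zero_grad()
        (actor_loss + critic_loss).backward()
        self.opt.step()
```

The key difference from a synchronous update is the trigger: instead of every agent reasoning about a new action at every time step, each agent's selection and gradient step fire only at its own macro-action termination times, with the discount applied over the variable number of primitive steps the action actually took.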
Author Information
Yuchen Xiao (J.P. Morgan & Northeastern University)
Weihao Tan (University of Massachusetts, Amherst)
Christopher Amato (Northeastern University)