Poster
Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning
Yiqin Yang · Xiaoteng Ma · Chenghao Li · Zewu Zheng · Qiyuan Zhang · Gao Huang · Jun Yang · Qianchuan Zhao

Thu Dec 09 04:30 PM -- 06:00 PM (PST)

Learning from datasets without interaction with environments (offline learning) is an essential step toward applying Reinforcement Learning (RL) algorithms in real-world scenarios. However, compared with its single-agent counterpart, offline multi-agent RL introduces more agents and larger state and action spaces, which makes it more challenging yet has attracted little attention. We demonstrate that current offline RL algorithms are ineffective in multi-agent systems due to accumulated extrapolation error. In this paper, we propose a novel offline RL algorithm, named Implicit Constraint Q-learning (ICQ), which effectively alleviates extrapolation error by trusting only the state-action pairs given in the dataset for value estimation. Moreover, we extend ICQ to multi-agent tasks by decomposing the joint policy under the implicit constraint. Experimental results demonstrate that the extrapolation error is successfully controlled within a reasonable range and is insensitive to the number of agents. We further show that ICQ achieves state-of-the-art performance on challenging multi-agent offline tasks (StarCraft II). Our code is publicly available at https://github.com/YiqinYang/ICQ.
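To make the abstract's core idea concrete, the following is a minimal, hypothetical PyTorch sketch of value estimation that trusts only dataset state-action pairs. It is not the authors' implementation (that lives in the linked repository): the Bellman target is evaluated only at next actions that actually appear in the dataset, and a batch-level softmax reweighting stands in for the implicit constraint. Names such as icq_style_target and the temperature alpha are illustrative assumptions.

```python
# Sketch of the "trust only dataset pairs" idea described in the abstract.
# Assumptions (not taken from the abstract): PyTorch, discrete actions,
# a batch of dataset transitions (s, a, r, s', a', done), and a
# temperature `alpha`. The exact ICQ objective is given in the paper.
import torch
import torch.nn.functional as F

def icq_style_target(q_net, target_q_net, batch, gamma=0.99, alpha=0.1):
    s, a, r, s_next, a_next, done = batch  # all sampled from the dataset

    with torch.no_grad():
        # Evaluate Q only at next state-action pairs that actually occur
        # in the dataset -- never at out-of-distribution actions, which
        # is what keeps extrapolation error from accumulating.
        q_next = target_q_net(s_next).gather(1, a_next.unsqueeze(1)).squeeze(1)

        # Implicit-constraint reweighting: a self-normalized softmax over
        # the batch upweights higher-value dataset actions, replacing the
        # explicit max over all (possibly unseen) actions.
        weights = F.softmax(q_next / alpha, dim=0) * q_next.size(0)

        target = r + gamma * (1.0 - done) * weights * q_next

    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_pred, target)
```

Because every quantity above is computed from dataset transitions alone, the number of agents does not enlarge the set of actions the critic must extrapolate over, which is consistent with the abstract's claim that the error is insensitive to agent count.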

Author Information

Yiqin Yang (Tsinghua University)
Xiaoteng Ma (Department of Automation, Tsinghua University)
Chenghao Li (Tsinghua University)
Zewu Zheng (Johns Hopkins University)
Qiyuan Zhang
Gao Huang (Tsinghua University)
Jun Yang (Tsinghua University)
Qianchuan Zhao (Tsinghua University)
