Poster
Distributional Reward Estimation for Effective Multi-agent Deep Reinforcement Learning
Jifeng Hu · Yanchao Sun · Hechang Chen · Sili Huang · Haiyin Piao · Yi Chang · Lichao Sun

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #223

Multi-agent reinforcement learning has drawn increasing attention in practice, e.g., robotics and autonomous driving, as it can explore optimal policies using samples generated by interacting with the environment. However, high reward uncertainty remains a problem when we want to train a satisfactory model, because obtaining high-quality reward feedback is usually expensive or even infeasible. To handle this issue, previous methods mainly focus on passive reward correction, while recent active reward estimation methods have proven to be an effective recipe for reducing the effect of reward uncertainty. In this paper, we propose a novel Distributional Reward Estimation framework for effective Multi-Agent Reinforcement Learning (DRE-MARL). Our main idea is to design multi-action-branch reward estimation and policy-weighted reward aggregation for stabilized training. Specifically, the multi-action-branch reward estimation models reward distributions on all action branches, and reward aggregation then combines these estimates into stable updating signals during training. Our intuition is that considering all possible consequences of actions can be useful for learning policies. The superiority of DRE-MARL is demonstrated on benchmark multi-agent scenarios, where it compares favorably with SOTA baselines in terms of both effectiveness and robustness.
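To make the policy-weighted reward aggregation idea concrete, here is a minimal sketch: per-action-branch reward estimates are combined using the current policy's action probabilities to form a single update signal. The names (`reward_mu`, `policy_probs`, `aggregate_reward`) and the mean-only parameterization of each branch are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def aggregate_reward(reward_mu: np.ndarray, policy_probs: np.ndarray) -> float:
    """Combine per-action-branch reward estimates, weighted by the policy.

    reward_mu:    (n_actions,) estimated mean reward of each action branch.
    policy_probs: (n_actions,) current policy's probability of each action.
    Returns the policy-weighted expectation used as a stable update signal.
    """
    return float(np.dot(policy_probs, reward_mu))

# Example with 4 hypothetical action branches for one agent:
reward_mu = np.array([0.2, 1.0, -0.5, 0.3])    # estimated branch rewards
policy_probs = np.array([0.1, 0.6, 0.1, 0.2])  # e.g., softmax policy output
print(aggregate_reward(reward_mu, policy_probs))  # 0.63
```

Because the aggregation marginalizes over all action branches rather than relying on the single (possibly noisy) observed reward, the resulting training signal varies less from step to step, which is consistent with the abstract's claim of stabilized training.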

Author Information

Jifeng Hu (Jilin University)
Yanchao Sun (University of Maryland, College Park)
Hechang Chen (Jilin University)
Sili Huang (Jilin University)
Haiyin Piao (Northwestern Polytechnical University)
Yi Chang (Jilin University)
Lichao Sun (Lehigh University)
