ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward
Zixian Ma · Rose Wang · Fei-Fei Li · Michael Bernstein · Ranjay Krishna

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #520

Modern multi-agent reinforcement learning frameworks rely on centralized training and reward shaping to perform well. However, centralized training and dense rewards are not readily available in the real world. Current multi-agent algorithms struggle to learn in the alternative setup of decentralized training or sparse rewards. To address these issues, we propose a self-supervised intrinsic reward, ELIGN (expectation alignment), inspired by the self-organization principle in zoology. Similar to how animals collaborate in a decentralized manner with those in their vicinity, agents trained with expectation alignment learn behaviors that match their neighbors' expectations. This allows the agents to learn collaborative behaviors without any external reward or centralized training. We demonstrate the efficacy of our approach across six tasks in the multi-agent particle and the complex Google Research Football environments, comparing ELIGN to sparse and curiosity-based intrinsic rewards. When the number of agents increases, ELIGN scales well in all multi-agent tasks except for one where agents have different capabilities. We show that agent coordination improves through expectation alignment because agents learn to divide tasks amongst themselves, break coordination symmetries, and confuse adversaries. These results identify tasks where expectation alignment is a more useful strategy than curiosity-driven exploration for multi-agent coordination, enabling agents to achieve zero-shot coordination.
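The core idea above — rewarding an agent for behaving in ways its neighbors expect — can be sketched as an intrinsic reward based on neighbors' prediction error. The snippet below is a minimal illustration, not the paper's implementation: the linear `ForwardModel`, the function names, and the use of a negative mean squared error as the alignment reward are all assumptions made for the sake of a runnable toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

class ForwardModel:
    """Toy linear dynamics model held by each agent.

    It predicts a neighboring agent's next observation from that
    neighbor's current observation and action. (Hypothetical stand-in
    for the learned dynamics models used by ELIGN.)
    """
    def __init__(self, obs_dim: int, act_dim: int):
        self.W = rng.normal(scale=0.1, size=(obs_dim + act_dim, obs_dim))

    def predict(self, obs: np.ndarray, act: np.ndarray) -> np.ndarray:
        return np.concatenate([obs, act]) @ self.W

def elign_reward(obs, act, next_obs, neighbor_models):
    """Expectation-alignment intrinsic reward (illustrative form).

    The agent is rewarded (negative error) for producing transitions
    that its in-vicinity neighbors' models can predict, so no external
    reward or centralized trainer is needed.
    """
    if not neighbor_models:          # no neighbors in range: no signal
        return 0.0
    errors = [np.mean((m.predict(obs, act) - next_obs) ** 2)
              for m in neighbor_models]
    return -float(np.mean(errors))   # higher (closer to 0) = better aligned

# Usage: one agent, two neighbors, 4-dim observations, 2-dim actions.
obs, act = rng.normal(size=4), rng.normal(size=2)
next_obs = rng.normal(size=4)
neighbors = [ForwardModel(4, 2), ForwardModel(4, 2)]
r = elign_reward(obs, act, next_obs, neighbors)
```

Because the reward depends only on models carried by nearby agents, each agent can compute it locally, which is what makes the scheme compatible with fully decentralized training.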

Author Information

Zixian Ma (Computer Science Department, Stanford University)

I graduated from Stanford with BS and MS degrees in CS, where I did research in the Stanford Vision and Learning Lab and the HCI group with Prof. Ranjay Krishna, Prof. Michael Bernstein, and Prof. Fei-Fei Li. I have research and engineering experience in multi-agent reinforcement learning, large language/vision/multi-task models, and human-computer interaction. My current research interests lie in human-AI collaboration and vision-language/multimodal models, as well as their compositionality and interpretability.

Rose Wang (Stanford)
Fei-Fei Li (Stanford University)
Michael Bernstein (Stanford University)
Ranjay Krishna (University of Washington)
