

Poster

Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance

Josh McClellan · Naveed Haghani · John Winder · Furong Huang · Pratap Tokekar

West Ballroom A-D #6103
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Multi-Agent Reinforcement Learning (MARL) struggles with sample inefficiency and poor generalization. These challenges are partially due to a lack of structure or inductive bias in the neural networks typically used to learn the policy. One form of structure commonly observed in multi-agent scenarios is symmetry. The field of Geometric Deep Learning has developed Equivariant Graph Neural Networks (EGNNs), which are equivariant (or symmetric) to rotations, translations, and reflections of nodes. Incorporating equivariance has been shown to improve learning efficiency and decrease error. In this paper, we demonstrate that EGNNs improve sample efficiency and generalization in MARL. However, we also show that a naive application of EGNNs to MARL results in poor early exploration due to a bias in the EGNN structure. To mitigate this bias, we present Exploration-enhanced Equivariant Graph Neural Networks, or E2GN2. We compare E2GN2 to other common function approximators on the standard MARL benchmarks MPE and SMACv2. E2GN2 demonstrates up to a 10x improvement in sample efficiency, greater final reward convergence, and up to a 5x gain in generalization over standard GNNs. These results pave the way for more reliable and effective solutions in complex multi-agent systems.
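To make the notion of equivariance concrete, the following is a minimal NumPy sketch of one E(n)-equivariant GNN layer in the style the abstract refers to (after Satorras et al., 2021). The toy linear weights `w_e`, `w_x`, `w_h` stand in for the learned MLPs and are hypothetical placeholders, not the paper's architecture: rotating the input coordinates rotates the output coordinates identically, while the node features stay invariant.

```python
import numpy as np

def egnn_layer(h, x, w_e, w_x, w_h):
    """One E(n)-equivariant GNN layer (illustrative sketch only).

    h: (n, d) node features (invariant under rotation/reflection).
    x: (n, 3) node coordinates (equivariant).
    w_e, w_x, w_h: toy linear weights standing in for the learned
    networks phi_e, phi_x, phi_h in the EGNN formulation.
    """
    n, d = h.shape
    diff = x[:, None, :] - x[None, :, :]           # (n, n, 3) pairwise x_i - x_j
    dist2 = (diff ** 2).sum(-1, keepdims=True)     # (n, n, 1) squared distances (invariant)
    # Edge messages are built only from invariant quantities.
    edge_in = np.concatenate(
        [np.broadcast_to(h[:, None], (n, n, d)),
         np.broadcast_to(h[None, :], (n, n, d)),
         dist2],
        axis=-1,
    )
    m = np.tanh(edge_in @ w_e)                     # (n, n, k) messages (invariant)
    # Coordinate update: invariant scalars scale equivariant direction vectors,
    # so the update transforms exactly like the coordinates themselves.
    x_new = x + (diff * (m @ w_x)).sum(axis=1)
    # Feature update from aggregated (invariant) messages.
    h_new = np.tanh(np.concatenate([h, m.sum(axis=1)], axis=-1) @ w_h)
    return h_new, x_new
```

Applying any orthogonal transform `Q` (a rotation or reflection) to `x` before the layer and comparing against transforming afterward gives identical coordinates and unchanged features, which is the symmetry property the paper exploits.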
