

Spotlight in Workshop: Multi-Agent Security: Security as Key to AI Safety

Robustness to Multi-Modal Environment Uncertainty in MARL using Curriculum Learning

Aakriti Agrawal · Rohith Aralikatti · Yanchao Sun · Furong Huang

Keywords: [ robustness ] [ Multi-Modal Uncertainty ] [ multi-agent reinforcement learning ]

Sat 16 Dec 1:50 p.m. PST — 2 p.m. PST
 
presentation: Multi-Agent Security: Security as Key to AI Safety
Sat 16 Dec 7 a.m. PST — 3:30 p.m. PST

Abstract:

Multi-agent reinforcement learning (MARL) plays a pivotal role in tackling real-world challenges. However, the seamless transfer of trained policies from simulation to the real world requires them to be robust to various environmental uncertainties. Because multi-agent systems are highly complex and non-stationary, existing works focus on finding a Nash equilibrium or optimal policy under uncertainty in a single environment variable (i.e., action, state, or reward). In real-world settings, however, uncertainty can occur in multiple environment variables simultaneously. This work is the first to formulate the generalised problem of robustness to multi-modal environment uncertainty in MARL. To this end, we propose a general robust training approach for multi-modal uncertainty based on curriculum learning techniques. We handle environmental uncertainty in more than one variable simultaneously and present extensive results across both cooperative and competitive MARL environments, demonstrating that our approach achieves state-of-the-art robustness.
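The abstract does not specify the curriculum schedule, so as a minimal illustrative sketch (not the authors' method): a common curriculum-learning pattern is to ramp the magnitude of injected uncertainty from zero to a target level over training, perturbing several environment variables (state, action, reward) at once. All function names and the linear schedule below are hypothetical.

```python
import random

def perturb(value, noise_level):
    """Add bounded uniform noise as a stand-in for environment uncertainty."""
    return value + random.uniform(-noise_level, noise_level)

def curriculum_noise(step, total_steps, max_noise):
    """Linearly ramp the uncertainty level from 0 toward max_noise.

    This linear schedule is an assumption for illustration; the paper's
    actual curriculum may differ.
    """
    return max_noise * step / total_steps

def train(total_steps=10, max_noise=0.5):
    """Toy training loop injecting multi-modal uncertainty each step."""
    levels = []
    for step in range(total_steps):
        noise = curriculum_noise(step, total_steps, max_noise)
        # Uncertainty is applied to multiple environment variables
        # simultaneously (the "multi-modal" setting):
        obs = perturb(1.0, noise)   # state/observation uncertainty
        act = perturb(0.0, noise)   # action uncertainty
        rew = perturb(1.0, noise)   # reward uncertainty
        levels.append(noise)
    return levels

levels = train()
```

Early in training the agents face a near-noiseless environment; the noise level then grows monotonically, so policies adapt gradually rather than being exposed to worst-case uncertainty from the start.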
