Promoting Resilience of Multi-Agent Reinforcement Learning via Confusion-Based Communication
Ofir Abu · Sarah Keren · Matthias Gerstgrasser · Jeffrey S Rosenschein

Agents operating in real-world settings are often faced with the need to adapt to unexpected changes in their environment. Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools to support the ability of RL agents to deal with the dynamic nature of their environment, which may often be intensified by the presence of other agents. In this work, we measure the resilience of a group of agents as the group's ability to adapt to unexpected perturbations in the environment. To promote resilience, we suggest facilitating collaboration within the group, and offer a novel confusion-based communication protocol that requires an agent to broadcast the local observations that are least aligned with its previous experience. We present an empirical evaluation of our approach on a set of simulated multi-taxi settings.
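The protocol sketched in the abstract can be illustrated with a minimal, hypothetical implementation: the paper does not specify how "alignment with previous experience" is computed, so here confusion is approximated as the distance from a new observation to its nearest neighbor in a replay buffer, and an observation is broadcast only when that distance exceeds a threshold. The class name, buffer mechanics, and threshold rule are all assumptions for illustration, not the authors' actual method.

```python
import numpy as np


class ConfusionBroadcaster:
    """Hypothetical sketch of a confusion-based broadcast rule: an agent
    shares the local observations least aligned with its past experience.
    'Alignment' is approximated here by nearest-neighbor distance to a
    replay buffer; the paper's actual confusion measure may differ."""

    def __init__(self, capacity=1000):
        self.buffer = []          # past observations (the agent's experience)
        self.capacity = capacity

    def confusion(self, obs):
        """Distance from obs to its nearest past observation.
        An empty buffer makes every observation maximally confusing."""
        if not self.buffer:
            return float("inf")
        past = np.stack(self.buffer)
        return float(np.min(np.linalg.norm(past - obs, axis=1)))

    def observe(self, obs, threshold=1.0):
        """Store obs; return it for broadcast only if it is confusing
        enough, i.e. far from everything seen before."""
        score = self.confusion(obs)
        self.buffer.append(obs)
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)
        return obs if score > threshold else None
```

Under this sketch, routine observations stay local while surprising ones (e.g. after an environment perturbation) are shared, which is one plausible way to let teammates adapt to changes any single agent detects.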

Author Information

Ofir Abu (Hebrew University of Jerusalem)
Sarah Keren (Technion)
Matthias Gerstgrasser (Harvard University)
Jeffrey S Rosenschein (The Hebrew University of Jerusalem)
