Poster

Shield Decentralization for Safe Multi-Agent Reinforcement Learning

Daniel Melcer · Christopher Amato · Stavros Tripakis

Hall J (level 1) #613

Keywords: [ multi-agent ] [ synthesis ] [ safety ] [ shielding ] [ Reinforcement Learning ]


Abstract:

Learning safe solutions is an important but challenging problem in multi-agent reinforcement learning (MARL). Shielded reinforcement learning is one approach for preventing agents from choosing unsafe actions. Current shielded reinforcement learning methods for MARL, however, make strong assumptions about communication and full observability. In this work, we extend the formalization of the shielded reinforcement learning problem to a decentralized multi-agent setting. We then present an algorithm for decomposing a centralized shield, allowing shields to be used in such decentralized, communication-free environments. Our results show that agents equipped with decentralized shields perform comparably to agents with centralized shields on several tasks, allowing shielding to be used in environments with decentralized training and execution for the first time.
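The abstract describes shielding and shield decomposition only at a high level. As a purely illustrative sketch of the general idea, and not of the paper's algorithm, the Python snippet below shields a toy two-agent grid world in two ways: a centralized shield that checks the full joint action, and a conservative communication-free decomposition in which each agent's local shield confines it to a pre-assigned zone. The environment, the `joint_is_safe` predicate, and the zone partition are all hypothetical assumptions made for this sketch.

```python
import random

# Toy setting (hypothetical, for illustration only): two agents move on a
# line of cells; the unsafe condition is a collision, i.e. both agents
# occupying the same cell after a step.
NUM_CELLS = 6
ACTIONS = (-1, 0, 1)  # move left, stay, move right

def step(pos, action):
    """Apply an action, clamping to the grid boundaries."""
    return max(0, min(NUM_CELLS - 1, pos + action))

def joint_is_safe(positions, joint_action):
    """Centralized safety predicate: no collision after the joint step."""
    nxt = [step(p, a) for p, a in zip(positions, joint_action)]
    return len(set(nxt)) == len(nxt)

def centralized_shield(positions, proposed):
    """A centralized shield observes the full joint action and replaces an
    unsafe proposal with some safe joint action (exhaustive fallback;
    hardcoded here for two agents)."""
    if joint_is_safe(positions, proposed):
        return proposed
    for a0 in ACTIONS:
        for a1 in ACTIONS:
            if joint_is_safe(positions, (a0, a1)):
                return (a0, a1)
    raise RuntimeError("no safe joint action exists")

# One simple, conservative decomposition: partition the cells into
# disjoint zones offline, one per agent.  Each local shield then needs
# only its own agent's position, so no run-time communication is needed.
ZONES = {0: range(0, 3), 1: range(3, 6)}

def local_shield(agent_id, own_pos, proposed_action):
    """Per-agent shield: permit an action only if the agent stays inside
    its assigned zone; otherwise fall back to a safe local action."""
    if step(own_pos, proposed_action) in ZONES[agent_id]:
        return proposed_action
    for a in ACTIONS:
        if step(own_pos, a) in ZONES[agent_id]:
            return a
    raise RuntimeError("no safe local action exists")

# Example rollout: random policies, safety enforced by the local shields.
positions = [1, 4]
for _ in range(20):
    proposed = [random.choice(ACTIONS) for _ in range(2)]
    shielded = [local_shield(i, positions[i], proposed[i]) for i in range(2)]
    positions = [step(p, a) for p, a in zip(positions, shielded)]
    assert len(set(positions)) == 2  # no collision ever occurs
```

Note that this particular zone-based decomposition is more conservative than the centralized shield: it forbids some joint actions the centralized shield would permit, which illustrates the kind of trade-off that can arise once run-time communication is removed.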
