Potential games are arguably one of the most important and widely studied classes of normal-form games. They define the archetypal setting of multi-agent coordination: all agents' utilities are perfectly aligned with each other via a common potential function. Can this intuitive framework be transplanted to the setting of Markov games? What are the similarities and differences between multi-agent coordination with and without state dependence? We present a novel definition of Markov Potential Games (MPGs) that generalizes prior attempts at capturing complex stateful multi-agent coordination. Counter-intuitively, insights from normal-form potential games do not carry over: an MPG may contain states whose associated state-games are zero-sum. In the opposite direction, Markov games in which every state-game is a potential game are not necessarily MPGs. Nevertheless, MPGs retain standard desirable properties, such as the existence of deterministic Nash policies. In our main technical result, we prove fast convergence of independent policy gradient to Nash policies by adapting recent gradient-dominance arguments developed for single-agent MDPs to the multi-agent learning setting.
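For concreteness, here is a minimal sketch of the kind of condition the abstract alludes to; the notation (value functions \(V_i\), potential \(\Phi\)) is assumed for illustration, not quoted verbatim from the paper. A Markov game is an MPG if there exists a state-dependent potential function such that every agent's change in value under a unilateral policy deviation equals the change in the potential:

\[
V_i^{s}\big(\pi_i', \pi_{-i}\big) - V_i^{s}\big(\pi_i, \pi_{-i}\big)
\;=\;
\Phi^{s}\big(\pi_i', \pi_{-i}\big) - \Phi^{s}\big(\pi_i, \pi_{-i}\big)
\qquad \text{for all } i,\; s,\; \pi_i,\; \pi_i',\; \pi_{-i},
\]

where \(V_i^{s}\) denotes agent \(i\)'s expected discounted return starting from state \(s\). Under this condition, for smooth policy parameterizations each agent's own gradient coincides with a partial gradient of the potential, \(\nabla_{\pi_i} V_i^{s} = \nabla_{\pi_i} \Phi^{s}\), so uncoordinated agents running independent policy gradient, \(\pi_i \leftarrow \mathrm{proj}\big(\pi_i + \eta\, \nabla_{\pi_i} V_i^{s}\big)\), are collectively ascending \(\Phi\); this is what makes the single-agent gradient-dominance machinery applicable.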
Author Information
Stefanos Leonardos (Singapore University of Technology and Design)
Will Overman (University of California, Irvine)
Ioannis Panageas (UC Irvine)
Georgios Piliouras (Singapore University of Technology and Design)
More from the Same Authors
- 2021 Spotlight: Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality »
  Stefanos Leonardos · Georgios Piliouras · Kelly Spendlove
- 2021: Learning in Matrix Games can be Arbitrarily Complex »
  Gabriel Andrade · Rafael Frongillo · Georgios Piliouras
- 2021: Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games »
  Stefanos Leonardos · Will Overman · Ioannis Panageas · Georgios Piliouras
- 2021: Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality »
  Stefanos Leonardos · Kelly Spendlove · Georgios Piliouras
- 2021 Poster: Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent »
  Emmanouil-Vasileios Vlatakis-Gkaragkounis · Lampros Flokas · Georgios Piliouras
- 2021 Poster: Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality »
  Stefanos Leonardos · Georgios Piliouras · Kelly Spendlove
- 2021 Poster: Online Learning in Periodic Zero-Sum Games »
  Tanner Fiez · Ryann Sim · Stratis Skoulakis · Georgios Piliouras · Lillian Ratliff
- 2019 Poster: First-order methods almost always avoid saddle points: The case of vanishing step-sizes »
  Ioannis Panageas · Georgios Piliouras · Xiao Wang
- 2019 Poster: Multiagent Evaluation under Incomplete Information »
  Mark Rowland · Shayegan Omidshafiei · Karl Tuyls · Julien Perolat · Michal Valko · Georgios Piliouras · Remi Munos
- 2019 Spotlight: Multiagent Evaluation under Incomplete Information »
  Mark Rowland · Shayegan Omidshafiei · Karl Tuyls · Julien Perolat · Michal Valko · Georgios Piliouras · Remi Munos
- 2017 Poster: Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos »
  Gerasimos Palaiopanos · Ioannis Panageas · Georgios Piliouras
- 2017 Spotlight: Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos »
  Gerasimos Palaiopanos · Ioannis Panageas · Georgios Piliouras