

Poster in Workshop: Deep Reinforcement Learning

Learning Two-Player Mixture Markov Games: Kernel Function Approximation and Correlated Equilibrium

Chris Junchi Li · Dongruo Zhou · Quanquan Gu · Michael Jordan


Abstract: We consider learning the Nash equilibrium in two-player Markov games with nonlinear function approximation, where the action-value function is approximated by a function in a Reproducing Kernel Hilbert Space (RKHS). The key challenge is how to perform exploration in this high-dimensional function space. We propose novel online learning algorithms that find the Nash equilibrium by minimizing the duality gap. At the core of our algorithms are upper and lower confidence bounds derived from the principle of optimism in the face of uncertainty. We prove that our algorithm attains an $O(\sqrt{T})$ regret with polynomial computational complexity, under very mild assumptions on the reward function and the underlying dynamics of the Markov game. This work provides one of the first results establishing such desirable complexities for learning two-player Markov games with nonlinear function approximation in the kernel mixture setting, and discusses its implications for function approximation via deep neural networks.
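The optimism principle mentioned in the abstract is commonly realized by pairing a kernel ridge regression estimate of the action-value function with an exploration bonus proportional to its predictive uncertainty, yielding matched upper and lower confidence bounds. The sketch below is an illustrative rendering of that generic construction, not the authors' algorithm; the RBF kernel choice, the `beta` scaling, and all function and parameter names are assumptions made for exposition.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * lengthscale ** 2))

def kernel_confidence_bounds(X_hist, y_hist, X_query, lam=1.0, beta=1.0):
    """Kernel ridge regression estimate of an action-value target plus an
    optimism/pessimism bonus proportional to the predictive uncertainty.

    X_hist : (n, d) visited state-action features
    y_hist : (n,)   regression targets (e.g. reward plus value backup)
    X_query: (m, d) state-action pairs to evaluate
    Returns (lower, upper) confidence bounds at X_query.
    """
    n = len(X_hist)
    K = rbf_kernel(X_hist, X_hist)                 # Gram matrix K
    k_q = rbf_kernel(X_query, X_hist)              # cross-covariances k(z, .)
    A = np.linalg.solve(K + lam * np.eye(n), np.eye(n))
    mean = k_q @ A @ y_hist                        # ridge regression mean
    # predictive variance: k(z, z) - k(z, .)^T (K + lam I)^{-1} k(., z)
    var = 1.0 - np.einsum("ij,jk,ik->i", k_q, A, k_q)
    bonus = beta * np.sqrt(np.maximum(var, 0.0))
    return mean - bonus, mean + bonus              # LCB, UCB
```

In an episodic two-player setting, one would typically clip these bounds to the admissible value range and derive the two players' policies from the resulting optimistic and pessimistic estimates; the per-episode duality gap they certify is what sums to the stated $O(\sqrt{T})$ regret.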
