
Learning in Congestion Games with Bandit Feedback
Qiwen Cui · Zhihan Xiong · Maryam Fazel · Simon Du

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #740

In this paper, we investigate Nash-regret minimization in congestion games, a class of games with benign theoretical structure and broad real-world applications. We first propose a centralized algorithm based on the optimism-in-the-face-of-uncertainty principle for congestion games with (semi-)bandit feedback, and obtain finite-sample guarantees. We then propose a decentralized algorithm via a novel combination of the Frank-Wolfe method and G-optimal design. By exploiting the structure of the congestion game, we show that the sample complexity of both algorithms depends only polynomially on the number of players and the number of facilities, but not on the size of the action set, which can be exponentially large in the number of facilities. We further define a new problem class, Markov congestion games, which allows us to model non-stationarity in congestion games. We propose a centralized algorithm for Markov congestion games whose sample complexity again depends only polynomially on all relevant problem parameters, but not on the size of the action set.
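To make the setting concrete, here is a minimal sketch of a congestion game round with semi-bandit feedback. The facility cost function and the helper names (`facility_cost`, `play_round`) are hypothetical illustrations, not the paper's algorithm; the linear cost is just one admissible load-dependent cost.

```python
def facility_cost(load):
    # Hypothetical congestion cost: grows linearly with the number of
    # players using the facility. The congestion-game model allows any
    # load-dependent cost; linear is chosen only for illustration.
    return load

def play_round(actions, n_facilities):
    """Given each player's chosen facility subset, return semi-bandit
    feedback: the cost of each facility that player actually used."""
    load = [0] * n_facilities
    for subset in actions:
        for f in subset:
            load[f] += 1
    # Semi-bandit feedback: each player observes per-facility costs for
    # its own chosen subset only, not for unused facilities.
    return [{f: facility_cost(load[f]) for f in subset} for subset in actions]

# Two players, three facilities; an action is a subset of facilities.
actions = [{0, 1}, {1, 2}]
feedback = play_round(actions, 3)
print(feedback)  # facility 1 is shared, so both players pay 2 on it
```

Note that the action set of a player is a collection of facility subsets, so it can be exponentially large in the number of facilities; the abstract's point is that the sample complexity avoids this blow-up.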

Author Information

Qiwen Cui (Department of Computer Science, University of Washington)
Zhihan Xiong (University of Washington)
Maryam Fazel (University of Washington)
Simon Du (University of Washington)
