A Mean-Field Game Approach to Cloud Resource Management with Function Approximation
Weichao Mao · Haoran Qiu · Chen Wang · Hubertus Franke · Zbigniew Kalbarczyk · Ravishankar Iyer · Tamer Basar

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #734

Reinforcement learning (RL) has gained increasing popularity for resource management in cloud services such as serverless computing. As self-interested users compete for shared resources in a cluster, the multi-tenancy nature of serverless platforms necessitates multi-agent reinforcement learning (MARL) solutions, which often suffer from severe scalability issues. In this paper, we propose a mean-field game (MFG) approach to cloud resource management that is scalable to a large number of users and applications and incorporates function approximation to deal with the large state-action spaces in real-world serverless platforms. Specifically, we present an online natural actor-critic algorithm for learning in MFGs compatible with various forms of function approximation. We theoretically establish its finite-time convergence to the regularized Nash equilibrium under linear function approximation and softmax parameterization. We further implement our algorithm using both linear and neural-network function approximations, and evaluate our solution on an open-source serverless platform, OpenWhisk, with real-world workloads from production traces. Experimental results demonstrate that our approach is scalable to a large number of users and significantly outperforms various baselines in terms of function latency and resource utilization efficiency.
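The abstract's core loop (a representative agent running an online natural actor-critic against a slowly updated population mean field) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's implementation: the toy "load level" MDP, the congestion reward, the one-hot features (linear function approximation in its simplest form), and all learning rates and iteration counts are hypothetical. The sketch shows the structure only: a TD(0) critic on the MDP induced by a frozen mean field, a softmax actor updated with the critic weights (with compatible features, the natural gradient reduces to exactly that), and a damped fixed-point update of the mean field.

```python
import numpy as np

# Hedged sketch of an online natural actor-critic for a toy mean-field game.
# All problem constants (sizes, rewards, learning rates) are illustrative
# assumptions; they are NOT taken from the paper.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 3, 2          # toy "load level" states, "request size" actions
FEAT_DIM = N_STATES * N_ACTIONS

def phi(s, a):
    """One-hot state-action features (tabular case of linear FA)."""
    f = np.zeros(FEAT_DIM)
    f[s * N_ACTIONS + a] = 1.0
    return f

def softmax_policy(theta, s):
    """Softmax parameterization over linear action scores."""
    logits = np.array([theta @ phi(s, a) for a in range(N_ACTIONS)])
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

def step(s, a, mu):
    """Toy dynamics: aggressive requests (a=1) push load up; reward is
    penalized by congestion from the population's mean action profile mu."""
    reward = 1.0 - 0.5 * s - 1.5 * mu[a]
    drift = 1 if a == 1 else -1
    s_next = int(np.clip(s + drift, 0, N_STATES - 1))
    return s_next, reward

def mean_action_profile(theta, n_rollout=500):
    """Empirical action distribution induced by the current policy."""
    counts = np.zeros(N_ACTIONS)
    s = 0
    for _ in range(n_rollout):
        a = rng.choice(N_ACTIONS, p=softmax_policy(theta, s))
        counts[a] += 1
        s, _ = step(s, a, counts / counts.sum())
    return counts / counts.sum()

def natural_actor_critic(outer_iters=30, inner_steps=300,
                         alpha_w=0.1, alpha_theta=0.5, gamma=0.9):
    theta = np.zeros(FEAT_DIM)
    mu = np.ones(N_ACTIONS) / N_ACTIONS        # initial mean field
    for _ in range(outer_iters):
        # Critic: TD(0) on the single-agent MDP induced by a frozen mean field.
        w = np.zeros(FEAT_DIM)
        s = 0
        a = rng.choice(N_ACTIONS, p=softmax_policy(theta, s))
        for _ in range(inner_steps):
            s2, r = step(s, a, mu)
            a2 = rng.choice(N_ACTIONS, p=softmax_policy(theta, s2))
            td = r + gamma * (w @ phi(s2, a2)) - w @ phi(s, a)
            w += alpha_w * td * phi(s, a)
            s, a = s2, a2
        # Actor: with compatible one-hot features and a softmax policy, the
        # natural policy-gradient step is just the critic weights.
        theta += alpha_theta * w
        # Mean-field update: damped fixed-point step toward the profile
        # induced by the representative agent's current policy.
        mu = 0.5 * mu + 0.5 * mean_action_profile(theta)
    return theta, mu

theta, mu = natural_actor_critic()
```

The damping factor on the mean-field update is one common way to stabilize the fixed-point iteration; the paper's theoretical guarantees additionally rely on entropy regularization (hence convergence to a *regularized* Nash equilibrium), which this toy sketch omits.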

Author Information

Weichao Mao (University of Illinois Urbana-Champaign)
Haoran Qiu (UIUC)
Chen Wang (International Business Machines)
Hubertus Franke (IBM Research)
Zbigniew Kalbarczyk (University of Illinois at Urbana-Champaign)
Ravishankar Iyer
Tamer Basar (University of Illinois at Urbana-Champaign)