Poster in Workshop: Offline Reinforcement Learning

Counter-Strike Deathmatch with Large-Scale Behavioural Cloning

Tim Pearce · Jun Zhu


Abstract: This paper describes an AI agent that plays the modern first-person-shooter (FPS) video game `Counter-Strike: Global Offensive' (CSGO) from pixel input. The agent, a deep neural network, matches the performance of the medium-difficulty built-in AI on the deathmatch game mode whilst adopting a humanlike play style. Previous research has mostly focused on games with convenient APIs and low-resolution graphics, allowing them to be run cheaply at scale. This is not the case for CSGO, whose system requirements are 100$\times$ those of previously studied FPS games. This limits the quantity of on-policy data that can be generated, precluding many reinforcement learning algorithms. Our solution uses behavioural cloning: training on a large noisy dataset scraped from human play on online servers (5.5 million frames, or 95 hours), and smaller datasets of clean expert demonstrations. This scale is an order of magnitude larger than prior work on imitation learning in FPS games. To introduce this challenging environment to the AI community, we open source code and datasets.
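For readers unfamiliar with the approach, below is a minimal behavioural-cloning sketch in PyTorch: a convolutional policy maps frames to action logits and is trained with supervised cross-entropy against the recorded human actions. The network shape, the discretised action count N_ACTIONS, the input resolution, and the names PolicyNet and bc_step are illustrative assumptions for this sketch, not the paper's actual architecture or action space.

import torch
import torch.nn as nn

N_ACTIONS = 51  # hypothetical discretised action count, not the paper's spec

class PolicyNet(nn.Module):
    """Small conv encoder over pixels with a linear head over discretised actions."""
    def __init__(self, n_actions: int = N_ACTIONS):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_actions)  # infers input size on first forward

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

def bc_step(model, optimiser, frames, actions):
    """One supervised step: imitate the human action recorded for each frame."""
    logits = model(frames)
    loss = nn.functional.cross_entropy(logits, actions)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

if __name__ == "__main__":
    model = PolicyNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy batch standing in for (frame, human-action) pairs from scraped play;
    # the 150x280 resolution is an assumption for illustration.
    frames = torch.rand(8, 3, 150, 280)
    actions = torch.randint(0, N_ACTIONS, (8,))
    print("loss:", bc_step(model, optimiser, frames, actions))

Because the supervision signal is just the logged human actions, the same loop applies unchanged to both the large noisy scraped dataset and the smaller clean expert demonstrations the abstract mentions.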
