

Poster

Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks

Yun Qu · Boyuan Wang · Jianzhun Shao · Yuhang Jiang · Chen Chen · Zhenbin Ye · Liu Linc · Yang Feng · Lin Lai · Hongyang Qin · Minwen Deng · Juchao Zhuo · Deheng Ye · Qiang Fu · YANG GUANG · Wei Yang · Lanxiao Huang · Xiangyang Ji

Great Hall & Hall B1+B2 (level 1) #1413
[ Project Page ] [ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

The advancement of Offline Reinforcement Learning (RL) and Offline Multi-Agent Reinforcement Learning (MARL) critically depends on the availability of high-quality, pre-collected offline datasets that represent real-world complexities and practical applications. However, existing datasets are often overly simplistic and lack realism. To address this gap, we propose Hokoff, a comprehensive set of pre-collected datasets covering both offline RL and offline MARL, accompanied by a robust framework to facilitate further research. The data are derived from Honor of Kings, a recognized Multiplayer Online Battle Arena (MOBA) game whose intricate nature closely resembles real-life situations. Using this framework, we benchmark a variety of offline RL and offline MARL algorithms. We also introduce a novel baseline algorithm tailored to the game's inherent hierarchical action space. Our results reveal that current offline RL approaches fall short in handling task complexity, generalization, and multi-task learning.
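To make the abstract's notion of a hierarchical action space concrete, the following is a minimal, hypothetical Python sketch (not the paper's actual baseline): a policy first samples a discrete action type, then a type-specific sub-action. The action type names, sub-action counts, and random stand-in logits are all assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Hypothetical hierarchical action space, loosely inspired by a MOBA:
    # the agent first picks an action type, then a type-specific sub-action.
    # These names and counts are illustrative assumptions, not the paper's
    # actual action definitions.
    ACTION_TYPES = ["move", "attack", "skill"]
    NUM_SUB_ACTIONS = {"move": 8, "attack": 4, "skill": 3}

    def softmax(logits):
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def sample_hierarchical_action(obs):
        """Sample (action_type, sub_action) from a placeholder policy.

        A learned baseline would produce these logits from `obs` with a
        neural network; random logits stand in for that here.
        """
        type_logits = rng.normal(size=len(ACTION_TYPES))       # stand-in for a type head
        a_type = rng.choice(ACTION_TYPES, p=softmax(type_logits))
        sub_logits = rng.normal(size=NUM_SUB_ACTIONS[a_type])  # stand-in for a sub-action head
        a_sub = rng.choice(NUM_SUB_ACTIONS[a_type], p=softmax(sub_logits))
        return a_type, int(a_sub)

    print(sample_hierarchical_action(obs=np.zeros(4)))

Factorizing the action this way keeps each output head small: the policy never has to score the full cross-product of types and sub-actions at once, which is the structural property such a baseline would exploit.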
