

Oral in Workshop: Foundation Models for Decision Making

Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks

Haoqi Yuan · Chi Zhang · Hongcheng Wang · Feiyang Xie · Penglin Cai · Hao Dong · Zongqing Lu

Presentation: Foundation Models for Decision Making
Fri 15 Dec, 6:15 a.m. – 3:30 p.m. PST

Abstract:

We study building an agent that solves diverse long-horizon tasks in open-world environments. Without human demonstrations, learning to accomplish tasks in a large open-world environment with reinforcement learning (RL) is extremely inefficient. To tackle this challenge, we convert the multi-task learning problem into learning basic skills and planning over those skills, and propose a Finding-skill to improve the sample efficiency of training all the skills. Using the popular open-world game Minecraft as the testbed, we propose three types of fine-grained basic skills and use RL with intrinsic rewards to acquire skills with high success rates. For skill planning, we leverage the prior knowledge in large language models to find the relationships between skills and build a skill graph. When the agent is solving a task, our skill search algorithm walks over the skill graph and generates proper skill plans for the agent. In experiments, our method accomplishes 40 diverse Minecraft tasks, many of which require sequentially executing more than 10 skills. Our method outperforms baselines by a large margin and is the most sample-efficient demonstration-free RL method for solving Minecraft Tech Tree tasks.
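To make the skill-planning step concrete, here is a minimal Python sketch of planning as a walk over a skill graph. Everything in it is an illustrative assumption rather than the authors' implementation: the skill names, the SKILL_GRAPH table, and the plan_skills function are hypothetical, the graph is assumed to be acyclic, and the full method additionally tracks item quantities, which this sketch omits.

# Hypothetical skill graph: edges point from a skill to the skills that
# must be executed before it. The names are illustrative, not the
# paper's actual skill set; quantities are omitted for brevity.
SKILL_GRAPH: dict[str, list[str]] = {
    "find_tree": [],                      # a Finding-skill: locate the target
    "harvest_log": ["find_tree"],
    "craft_planks": ["harvest_log"],
    "craft_stick": ["craft_planks"],
    "craft_crafting_table": ["craft_planks"],
    "craft_wooden_pickaxe": ["craft_planks", "craft_stick", "craft_crafting_table"],
}

def plan_skills(target: str, graph: dict[str, list[str]]) -> list[str]:
    """Depth-first walk over the skill graph: schedule every prerequisite
    before the skill that needs it, so the returned list is a valid
    execution order ending with the target skill."""
    plan: list[str] = []
    done: set[str] = set()

    def visit(skill: str) -> None:
        if skill in done:
            return
        for prereq in graph[skill]:
            visit(prereq)
        done.add(skill)
        plan.append(skill)

    visit(target)
    return plan

print(plan_skills("craft_wooden_pickaxe", SKILL_GRAPH))
# ['find_tree', 'harvest_log', 'craft_planks', 'craft_stick',
#  'craft_crafting_table', 'craft_wooden_pickaxe']

A traversal of this kind mirrors the abstract's description: prerequisite edges encode the relationships between skills (extracted from the language model in the paper), and the depth-first search produces an execution order that ends with the target skill.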
