

Poster in Workshop: Foundation Models for Decision Making

LLM Augmented Hierarchical Agents

Bharat Prakash · Tim Oates · Tinoosh Mohsenin

presentation: Foundation Models for Decision Making
Fri 15 Dec 6:15 a.m. PST — 3:30 p.m. PST

Abstract:

Solving long-horizon, temporally extended tasks with Reinforcement Learning (RL) is extremely challenging, and the difficulty is compounded by the common practice of learning without prior knowledge (tabula rasa learning). Humans can generate and execute plans with temporally extended actions and can learn new tasks quickly because we almost never solve problems from scratch; we want autonomous agents to have the same capability. Recently, LLMs have been shown to encode a tremendous amount of knowledge about the world and to exhibit impressive in-context learning and reasoning capabilities. However, using LLMs to solve real-world tasks is difficult because these models are not grounded in the task at hand. We leverage the planning capabilities of LLMs while using RL to provide the essential grounding through environment interaction. In this paper, we present a hierarchical agent that uses LLMs to solve long-horizon tasks. Rather than relying on LLMs entirely, we use them to guide the high-level policy, making it significantly more sample efficient. We evaluate our method on simulation environments such as MiniGrid, SkillHack, and Crafter, and on a real robot arm performing block manipulation tasks. Agents trained with our method outperform baseline methods and, once trained, do not depend on the LLM during deployment.
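
For illustration, here is a minimal Python sketch of the core idea the abstract describes: during training, the high-level policy's choice of skill is sometimes steered by an LLM suggestion, while at deployment the learned policy selects skills on its own. Every name here (SKILLS, query_llm_suggestion, HighLevelPolicy, llm_guidance_prob) is a hypothetical stand-in, not the paper's actual implementation.

```python
import random

# Hypothetical skill vocabulary for a MiniGrid-style task; the paper's
# actual skill set and prompting scheme are not specified here.
SKILLS = ["go_to_key", "pick_up_key", "open_door", "go_to_goal"]

def query_llm_suggestion(task_description, state_summary):
    """Placeholder for an LLM call mapping task context to a suggested skill.

    A real implementation would send the prompt to an LLM and parse the
    reply; this stub picks randomly so the sketch stays runnable.
    """
    prompt = (
        f"Task: {task_description}\n"
        f"State: {state_summary}\n"
        f"Choose the next skill from {SKILLS}."
    )
    _ = prompt  # unused in this stub
    return random.choice(SKILLS)

class HighLevelPolicy:
    """Toy tabular stand-in for a learned high-level policy over skills."""

    def __init__(self, epsilon=0.1, llm_guidance_prob=0.5):
        self.q = {}  # (state, skill) -> value estimate, updated by RL
        self.epsilon = epsilon
        # Probability of deferring to the LLM during training; one could
        # anneal this toward zero as the policy improves.
        self.llm_guidance_prob = llm_guidance_prob

    def select_skill(self, state, task, training=True):
        if training and random.random() < self.llm_guidance_prob:
            # During training, sometimes follow the LLM's suggestion,
            # biasing exploration toward plausible skill sequences.
            return query_llm_suggestion(task, state)
        if training and random.random() < self.epsilon:
            return random.choice(SKILLS)  # standard epsilon-greedy exploration
        # At deployment only the learned values are consulted,
        # so no LLM calls are needed after training.
        return max(SKILLS, key=lambda s: self.q.get((state, s), 0.0))
```

The design point this sketch captures is that the LLM shapes exploration rather than acting as the policy: the guidance probability applies only when training=True, so a trained agent runs without any LLM in the loop, consistent with the deployment claim in the abstract.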
