Poster in Workshop: Foundation Models for Decision Making

Prospector: Improving LLM Agents with Self-Asking and Trajectory Ranking

Byoungjip Kim · Youngsoo Jang · Lajanugen Logeswaran · Geon-Hyeong Kim · Yu Jin Kim · Honglak Lee · Moontae Lee

[ Project Page ]
Presentation: Foundation Models for Decision Making
Fri 15 Dec 6:15 a.m. PST — 3:30 p.m. PST

Abstract:

Large language models (LLMs) have shown the ability to solve complex decision-making tasks beyond natural language processing tasks. Current LLM agents such as ReAct can solve interactive decision-making tasks by imitating the few-shot demonstrations given in the prompt. LLM agents based on few-shot in-context learning (ICL) achieve surprisingly high performance without training. Despite their simplicity and generalizability, ICL-based approaches do not optimize trajectories based on the reward from the environment. In this paper, we introduce Prospector, a reflective LLM agent that features Self-Asking and Trajectory Ranking. To elicit the LLM agent to generate actions that better follow a given instruction, we introduce additional Self-Asking steps in the few-shot demonstrations. Furthermore, to take advantage of the stochastic generation of LLMs, we provide Trajectory Ranking, in which the LLM agent generates diverse (creative) trajectories and the most rewarding trajectory is selected using reward prediction models. On representative decision-making benchmark environments such as ALFWorld and WebShop, we empirically demonstrate that Prospector considerably increases the task success rate, outperforming recent methods such as ReAct and Reflexion.
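As a rough sketch of the Trajectory Ranking idea described in the abstract, the Python snippet below samples several stochastic rollouts from an agent and keeps the one a reward prediction model scores highest. All names here (`generate_trajectory`, `reward_model`, `num_samples`) are hypothetical placeholders for illustration, not the paper's actual interface.

```python
import random
from typing import Callable, List, Tuple

# A trajectory is modeled here as a list of step strings
# (thought / action / observation). This representation is an
# assumption for illustration; the paper does not prescribe one.
Trajectory = List[str]


def trajectory_ranking(
    generate_trajectory: Callable[[], Trajectory],  # one stochastic LLM rollout
    reward_model: Callable[[Trajectory], float],    # predicted reward of a rollout
    num_samples: int = 8,                           # hypothetical sample budget
) -> Tuple[Trajectory, float]:
    """Sample diverse trajectories and return the highest-ranked one."""
    best_traj: Trajectory = []
    best_score = float("-inf")
    for _ in range(num_samples):
        traj = generate_trajectory()  # diversity comes from sampling (temperature > 0)
        score = reward_model(traj)    # rank with the reward prediction model
        if score > best_score:
            best_traj, best_score = traj, score
    return best_traj, best_score


if __name__ == "__main__":
    # Toy usage with canned rollouts and a dummy scorer, purely to show the flow.
    rollouts = [
        ["ask: which attribute matters?", "act: buy[red mug]"],
        ["act: buy[blue mug]"],
    ]
    best, score = trajectory_ranking(
        generate_trajectory=lambda: random.choice(rollouts),
        reward_model=lambda t: float(len(t)),  # dummy: prefer longer rollouts
        num_samples=4,
    )
    print(score, best)
```

A best-of-N selection like this trades extra inference cost for a higher chance of success, which matches the abstract's premise that diverse generation plus reward-based ranking can improve on a single greedy rollout.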
