

Oral in Workshop: Foundation Models for Decision Making

The Unsolved Challenges of LLMs in Open-Ended Web Tasks: A Case Study

Rim Assouel · Tom Marty · Massimo Caccia · Issam Hadj Laradji · Alexandre Drouin · Sai Rajeswar Mudumba · Hector Palacios · Quentin Cappart · David Vazquez · Nicolas Chapados · Maxime Gasse · Alexandre Lacoste

 
Presentation: Foundation Models for Decision Making
Fri 15 Dec, 6:15 a.m. – 3:30 p.m. PST

Abstract:

In this work, we investigate the challenges of developing goal-driven AI agents capable of performing open-ended tasks in a web environment using zero-shot learning. Our primary focus is on harnessing the capabilities of large language models (LLMs) for web navigation through HTML-based user interfaces (UIs). We evaluate the MiniWoB benchmark and show that it is a suitable yet challenging platform for assessing an agent's ability to comprehend and solve tasks without prior human demonstrations. Our main contribution is a set of extensive experiments comparing agent design choices, such as the action space, the observation space, and the choice of LLM, to shed light on the bottlenecks and limitations of LLM-based zero-shot learning in this domain and to foster further research in this area. In our empirical analysis, we find that: (1) a code-based action space is notably the most effective; (2) open-source LLMs hold their own as competitive agents for open-ended web tasks when compared to their proprietary counterparts; and (3) an accessibility-based representation of web pages, despite some performance loss, is a cost-effective strategy, particularly as web page sizes increase.
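
To make finding (1) concrete, here is a minimal sketch of what a code-based action space might look like: the LLM emits a short snippet of code over a few page-interaction primitives rather than picking from a fixed menu of discrete actions. All names below (FakePage, click, type_text, run_agent_step) are hypothetical illustrations, not the paper's actual API.

# A minimal sketch of a code-based action space (all names hypothetical,
# not the paper's API): the LLM emits a short Python snippet over a few
# page-interaction primitives instead of choosing from a discrete menu.

from dataclasses import dataclass, field

@dataclass
class FakePage:
    # Stand-in for a live browser page; records actions for inspection.
    log: list = field(default_factory=list)

    def click(self, selector: str) -> None:
        self.log.append(("click", selector))

    def type_text(self, selector: str, text: str) -> None:
        self.log.append(("type", selector, text))

def run_agent_step(llm_output: str, page: FakePage) -> None:
    # Interpret the LLM completion as code over the primitives.
    # exec() on model output is unsafe outside a sandbox; demo only.
    primitives = {"click": page.click, "type_text": page.type_text}
    exec(llm_output, {"__builtins__": {}}, primitives)

# Example completion for a MiniWoB-style login task:
completion = 'type_text("#user", "alice")\ntype_text("#pass", "s3cret")\nclick("#login")'
page = FakePage()
run_agent_step(completion, page)
print(page.log)  # [('type', '#user', 'alice'), ('type', '#pass', 's3cret'), ('click', '#login')]

A code-based action space lets a single completion compose several primitive actions, which is one plausible reason it can outperform menus of atomic actions.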
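Finding (3), on accessibility-based observations, can be illustrated in the same spirit. The sketch below, using only Python's standard library, flattens an HTML snippet into a compact list of interactive elements with roles and names; the role/name scheme is a simplification for illustration, not the paper's exact observation format.

# Illustrative only: a crude accessibility-style view that keeps just the
# interactive elements and their labels, discarding layout markup. Real
# accessibility trees (e.g. from a browser) are richer; this simplification
# only shows why such a view shrinks relative to raw HTML as pages grow.

from html.parser import HTMLParser

ROLES = {"a": "link", "button": "button", "input": "textbox", "select": "combobox"}

class AXExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nodes = []

    def handle_starttag(self, tag, attrs):
        if tag in ROLES:
            a = dict(attrs)
            name = a.get("aria-label") or a.get("id") or ""
            self.nodes.append(f'{ROLES[tag]} "{name}"')

raw_html = """
<div class="wrapper"><div class="row" style="margin:4px;padding:2px">
  <input id="username"><input id="password" type="password">
  <button aria-label="Sign in">Sign in</button>
</div></div>
"""
ax = AXExtractor()
ax.feed(raw_html)
compact = "\n".join(ax.nodes)
print(compact)                            # textbox "username" / textbox "password" / button "Sign in"
print(len(raw_html), "->", len(compact))  # raw HTML size vs. compact view

The compact view drops class names, inline styles, and wrapper divs, which is where most of the token cost of large pages lives; the trade-off, as the abstract notes, is some loss of information and hence of task performance.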
