
Poster

Gorilla: Teaching LLMs to Use Tools

Shishir G Patil · Tianjun Zhang · Xin Wang · Joseph Gonzalez

East Exhibit Hall A-C #4925
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Large Language Models (LLMs) have seen an impressive wave of advances recently, with models now excelling in a variety of tasks, such as mathematical reasoning and program synthesis. However, their potential to effectively use tools via API calls remains unfulfilled. This is a challenging task even for today's state-of-the-art LLMs such as GPT-4, largely due to their unawareness of what APIs are available and how to use them in a frequently updated toolset. We develop Gorilla, a finetuned LLaMA model that surpasses the performance of GPT-4 on writing API calls. When combined with a document retriever, Gorilla demonstrates a strong capability to adapt to test-time document changes, enabling flexible user updates or version changes. It also substantially mitigates the issue of hallucination, commonly encountered when prompting LLMs directly. To evaluate the model's ability, we introduce APIBench, a comprehensive dataset consisting of HuggingFace, TorchHub, and TensorHub APIs. The successful integration of the retrieval system with Gorilla demonstrates the potential for LLMs to use tools more accurately, keep up with frequently updated documentation, and consequently increase the reliability and applicability of their outputs. Gorilla's code, model, and data will be open-sourced.
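As a rough illustration of the retrieval-augmented inference pattern the abstract describes (retrieve the relevant API documentation at test time, then condition generation on it), here is a minimal Python sketch. The toy corpus, overlap-based scoring, and generate() stub are illustrative placeholders, not Gorilla's actual retriever or model.

# Sketch of retrieval-augmented API-call generation: fetch the most relevant
# API doc for a user query, prepend it to the prompt, and let the model ground
# its API call in current documentation. All names here are hypothetical.

# Toy corpus standing in for HuggingFace / TorchHub / TensorHub documentation.
API_DOCS = [
    "torch.hub.load(repo, model): loads a pretrained model from TorchHub.",
    "transformers.pipeline(task, model): builds an inference pipeline for a task.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the doc with the highest word-overlap score (stand-in for a real retriever)."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(docs, key=score)

def generate(prompt: str) -> str:
    """Placeholder for a call to the finetuned model; swap in a real LLM here."""
    return "<model-generated API call>"

query = "Load a pretrained model from TorchHub"
doc = retrieve(query, API_DOCS)
prompt = f"Use this API documentation:\n{doc}\n\nTask: {query}\nAPI call:"
print(generate(prompt))

Because the documentation is injected at inference time rather than baked into the weights, updating the corpus is enough to track a frequently changing toolset, which is the adaptability-to-document-changes property the abstract highlights.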
