

Oral Poster

ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings

Shibo Hao · Tianyang Liu · Zhen Wang · Zhiting Hu

Great Hall & Hall B1+B2 (level 1) #312
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST
 
Oral presentation: Oral 3B NLP/Tools
Wed 13 Dec 8 a.m. PST — 8:45 a.m. PST

Abstract:

Integrating large language models (LLMs) with various tools has attracted increasing attention in the field. Existing approaches either fine-tune the LLM, which is both computationally costly and limited to a fixed set of tools, or prompt the LLM with in-context tool demonstrations. Although the latter method adapts to new tools, it struggles with the inherent context length constraint of LLMs when many new tools are presented, and mastering a new set of tools from few-shot examples remains challenging, resulting in suboptimal performance. To address these limitations, we propose a novel solution, named ToolkenGPT, wherein LLMs effectively learn to master tools by predicting them as tokens through tool embeddings, enabling them to solve complex tasks. In this framework, each tool is transformed into a vector embedding and plugged into the language model head. Once a tool is triggered during text generation, the LLM enters a special function mode to execute the tool call. Our experiments show that tool embeddings effectively help LLMs understand tool use and improve performance on several tasks, including numerical reasoning, knowledge-based question answering, and embodied decision-making.
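The core mechanism lends itself to a compact illustration. Below is a minimal PyTorch sketch of the idea as described in the abstract, not the authors' implementation: each tool contributes one learnable embedding row concatenated onto the frozen model's output head, so tools are predicted exactly like ordinary next tokens. All names here (ToolkenHead, tool_embeddings, the toy dimensions) are hypothetical.

```python
import torch
import torch.nn as nn

class ToolkenHead(nn.Module):
    """Augments a frozen LM head with one learnable embedding per tool."""

    def __init__(self, lm_head: nn.Linear, num_tools: int):
        super().__init__()
        self.lm_head = lm_head
        for p in self.lm_head.parameters():   # the LLM and its head stay frozen
            p.requires_grad = False
        hidden = lm_head.in_features
        # The only trainable parameters: one "toolken" embedding per tool.
        self.tool_embeddings = nn.Parameter(torch.randn(num_tools, hidden) * 0.02)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        word_logits = self.lm_head(hidden_states)              # (..., vocab_size)
        tool_logits = hidden_states @ self.tool_embeddings.T   # (..., num_tools)
        # Concatenated logits: each tool acts as one extra vocabulary token.
        return torch.cat([word_logits, tool_logits], dim=-1)

# Toy usage: a 16-dim "LM" with a 100-word vocabulary and 3 tools.
vocab_size, num_tools = 100, 3
head = ToolkenHead(nn.Linear(16, vocab_size, bias=False), num_tools)
next_id = head(torch.randn(1, 16)).argmax(dim=-1).item()
if next_id >= vocab_size:
    # A toolken was predicted: the abstract's "function mode" would now fill in
    # the tool's arguments, execute the call, and splice the result back in.
    print(f"tool {next_id - vocab_size} triggered")
else:
    print(f"ordinary token {next_id}")
```

Because only tool_embeddings carries gradients in this sketch, a new tool can be supported by training a single vector while the LLM itself remains untouched, which is what lets the approach scale to a large tool set.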
