

Poster in Workshop: Instruction Tuning and Instruction Following

Balancing Multiple Objectives for Efficient Metaprompts for Data Labeling Tasks with Extensive Guidelines

Tobias Schnabel · Jennifer Neville

Keywords: [ prompt optimization ] [ prompt compression ]


Abstract:

Spurred by ever-increasing context-window sizes, two recent trends in the application of large language models (LLMs) for data annotation and pattern extraction are (i) longer prompts with complex structures, rich information, and task instructions and (ii) the processing of many data points in the same prompt (minibatching) to increase query efficiency. In the process of annotating and analyzing data, the same metaprompts are re-used with many different inputs and are thus worth optimizing for length, as billing is proportional to overall token usage. Traditional prompt optimization techniques address these two trends only insufficiently: first, by ignoring the structure of prompts, they are limited in the transformation operations they can perform, and second, they do not consider important factors such as input and output costs or adherence to output specifications. To overcome these limitations, we propose structure-aware multi-objective metaprompt optimization (SAMMO), a framework that automatically balances multiple objectives for high-level prompt structures and encompasses several existing prompt optimization methods as special cases. Drawing from approaches for neural architecture search, SAMMO carries out a genetic search over a set of mutation operators that can change the structure and information contained in the prompt in non-trivial ways. Empirically, we show on a wide range of annotation tasks that SAMMO succeeds in finding metaprompts that have over 30% fewer tokens while remaining as accurate as the baseline prompt.
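The abstract outlines the core loop: represent the metaprompt as a structured object rather than a flat string, apply mutation operators drawn from a fixed set, and select candidates under a multi-objective score that trades accuracy against token cost. The sketch below illustrates that loop in Python; it is a minimal reconstruction under stated assumptions, not SAMMO's actual API. All names (`PromptNode`, `drop_component`, `genetic_search`, `accuracy_fn`), the specific operators, and the linear score combination are hypothetical.

```python
import copy
import random
from dataclasses import dataclass, field
from typing import Callable, List

# A structured metaprompt: a tree of components (task instructions,
# guideline sections, few-shot examples) so mutation operators can act
# on whole components instead of raw characters.
@dataclass
class PromptNode:
    kind: str                     # e.g. "instruction", "guideline", "example"
    text: str
    children: List["PromptNode"] = field(default_factory=list)

    def render(self) -> str:
        parts = [self.text] + [child.render() for child in self.children]
        return "\n".join(p for p in parts if p)

def num_tokens(prompt: PromptNode) -> int:
    # Whitespace split as a crude stand-in for a real tokenizer.
    return len(prompt.render().split())

# --- Illustrative structure-aware mutation operators ---
def drop_component(prompt: PromptNode) -> PromptNode:
    """Remove one randomly chosen child component, if any exist."""
    mutant = copy.deepcopy(prompt)
    if mutant.children:
        mutant.children.pop(random.randrange(len(mutant.children)))
    return mutant

def shuffle_components(prompt: PromptNode) -> PromptNode:
    """Reorder sibling components without changing their content."""
    mutant = copy.deepcopy(prompt)
    random.shuffle(mutant.children)
    return mutant

MUTATORS = [drop_component, shuffle_components]

def score(prompt: PromptNode,
          accuracy_fn: Callable[[str], float],
          token_weight: float = 0.001) -> float:
    """Multi-objective score: task accuracy minus a token-cost penalty."""
    return accuracy_fn(prompt.render()) - token_weight * num_tokens(prompt)

def genetic_search(seed: PromptNode,
                   accuracy_fn: Callable[[str], float],
                   generations: int = 10,
                   population_size: int = 8) -> PromptNode:
    """Mutate the current population, keep the best scorers, repeat."""
    population = [seed]
    for _ in range(generations):
        offspring = [random.choice(MUTATORS)(p)
                     for p in population for _ in range(2)]
        population = sorted(population + offspring,
                            key=lambda p: score(p, accuracy_fn),
                            reverse=True)[:population_size]
    return population[0]
```

In practice, `accuracy_fn` would evaluate the rendered metaprompt on held-out labeled examples via LLM calls, and the token-penalty weight sets the trade-off between compression and annotation accuracy.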
