Poster in Workshop: Workshop on robustness of zero/few-shot learning in foundation models (R0-FoMo)

DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines

Omar Khattab · Arnav Singhvi · Paridhi Maheshwari · Zhiyuan Zhang · Keshav Santhanam · Sri Vardhamanan A · Saiful Haq · Ashutosh Sharma · Thomas Joshi · Hanna Moazam · Heather Miller · Matei A Zaharia · Christopher Potts


Abstract:

The ML community is rapidly exploring techniques for prompting language models (LMs), but existing LM pipelines often rely on hard-coded “prompt templates” discovered via trial and error. We introduce DSPy, a programming model that abstracts LM pipelines as imperative computation graphs where LMs are invoked through declarative modules. DSPy modules are parameterized so they can learn to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that optimizes any DSPy pipeline to maximize a given metric. We conduct two case studies and show that a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting and pipelines with expert-created demonstrations.
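To make the abstract's two central ideas concrete, the following is a minimal, self-contained Python sketch, not the real DSPy API: a declarative module that holds a signature and is parameterized by demonstrations, and a "compiler" that bootstraps those demonstrations by keeping only the traces a metric accepts. The `fake_lm`, `Predict`, and `compile_pipeline` names are hypothetical stand-ins invented for this illustration.

```python
def fake_lm(prompt):
    # Hypothetical stand-in for a language-model call: it simply
    # uppercases the text after the last "Q: " marker in the prompt.
    question = prompt.rsplit("Q: ", 1)[-1]
    return question.strip().upper()

class Predict:
    """Declarative module: a signature plus learned demonstrations."""
    def __init__(self, signature):
        self.signature = signature  # e.g. "question -> answer"
        self.demos = []             # parameters filled in by the compiler
    def __call__(self, question):
        demo_text = "".join(f"Q: {q}\nA: {a}\n" for q, a in self.demos)
        return fake_lm(f"{self.signature}\n{demo_text}Q: {question}")

def compile_pipeline(module, trainset, metric):
    """Bootstrap demonstrations: run the module on training inputs and
    keep each (input, output) trace that the metric accepts."""
    for question, gold in trainset:
        prediction = module(question)
        if metric(prediction, gold):
            module.demos.append((question, prediction))
    return module

qa = Predict("question -> answer")
trainset = [("hello", "HELLO"), ("dspy", "DSPY")]
compiled = compile_pipeline(qa, trainset,
                            metric=lambda pred, gold: pred == gold)
print(len(compiled.demos))  # → 2
```

In the real system the module's "parameters" can also include prompt instructions or finetuned weights, and the compiler searches over them; this sketch only shows the demonstration-bootstrapping case described in the abstract.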