Poster
Grammar Prompting for Domain-Specific Language Generation with Large Language Models
Bailin Wang · Zi Wang · Xuezhi Wang · Yuan Cao · Rif A. Saurous · Yoon Kim

Tue Dec 12, 03:15 PM – 05:15 PM (PST) @ Great Hall & Hall B1+B2 #329

Large language models (LLMs) can learn to perform a wide range of natural language tasks from just a handful of in-context examples. However, for generating strings from highly structured languages (e.g., semantic parsing to complex domain-specific languages, or DSLs), it is challenging for the LLM to generalize from just a few exemplars. We propose grammar prompting, a simple approach to enable LLMs to use external knowledge and domain-specific constraints, expressed through a grammar in Backus–Naur Form (BNF), during in-context learning. Grammar prompting augments each demonstration example with a specialized grammar that is minimally sufficient for generating the particular output example, where the specialized grammar is a subset of the full DSL grammar. For inference, the LLM first predicts a BNF grammar given a test input, and then generates the output according to the rules of the grammar. Experiments demonstrate that grammar prompting can enable LLMs to perform competitively on a diverse set of DSL generation tasks, including semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and SMILES-based molecule generation.
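
To make the two-stage procedure described in the abstract concrete, the following is a minimal Python sketch of prompt assembly and inference. It is not the authors' released implementation: the llm_complete completion function, the demonstration format, and the grammar strings are all illustrative assumptions.

def build_prompt(demonstrations, full_grammar, test_input):
    """Assemble a grammar-prompting prompt.

    Each demonstration pairs an input with a specialized BNF grammar that is
    minimally sufficient to derive its output (a subset of the full DSL
    grammar), followed by the output itself.
    """
    parts = [f"Full DSL grammar (BNF):\n{full_grammar}\n"]
    for demo in demonstrations:
        parts.append(
            f"Input: {demo['input']}\n"
            f"Specialized grammar:\n{demo['specialized_grammar']}\n"
            f"Output: {demo['output']}\n"
        )
    # The test input ends with an open "Specialized grammar:" slot for stage 1.
    parts.append(f"Input: {test_input}\nSpecialized grammar:\n")
    return "\n".join(parts)


def grammar_prompt_generate(llm_complete, demonstrations, full_grammar, test_input):
    """Two-stage inference: predict a specialized grammar, then the output."""
    prompt = build_prompt(demonstrations, full_grammar, test_input)
    # Stage 1: the LLM predicts a specialized BNF grammar for the test input.
    predicted_grammar = llm_complete(prompt, stop="Output:")
    # Stage 2: the LLM generates the DSL program conditioned on that grammar;
    # decoding could additionally be constrained to follow the grammar's rules.
    output = llm_complete(prompt + predicted_grammar + "\nOutput: ")
    return predicted_grammar, output

In this sketch, llm_complete(prompt, stop=...) stands in for any text-completion API; constrained decoding against the predicted grammar is noted only as a comment, since enforcing it depends on the decoding interface available.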

Author Information

Bailin Wang (Massachusetts Institute of Technology)
Zi Wang (Google DeepMind)
Xuezhi Wang (Google)
Yuan Cao (Google Brain)
Rif A. Saurous (Google)
Yoon Kim (Massachusetts Institute of Technology)
