

Spotlight in Workshop: Table Representation Learning Workshop

How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings

Shuaichen Chang · Eric Fosler-Lussier

Keywords: [ Text-to-SQL ] [ Large language models ] [ Database Representation ] [ in-context learning ]

Fri 15 Dec 12:08 p.m. PST — 12:15 p.m. PST
 
presentation: Table Representation Learning Workshop
Fri 15 Dec 6:30 a.m. PST — 3:30 p.m. PST

Abstract:

Large language models (LLMs) with in-context learning have demonstrated remarkable capability in the text-to-SQL task. Previous research has prompted LLMs with various demonstration-retrieval strategies and intermediate reasoning steps to enhance their performance. However, those works often employ differing strategies when constructing the prompt text for text-to-SQL inputs, such as databases and demonstration examples, which makes both the prompt constructions and the works' primary contributions difficult to compare. Furthermore, selecting an effective prompt construction remains a persistent problem for future research. To address these limitations, we comprehensively investigate the impact of prompt constructions across zero-shot, single-domain, and cross-domain settings and provide insights into prompt construction for future text-to-SQL studies.
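To make "prompt construction" concrete, below is a minimal sketch of how a database schema, optional demonstration examples, and a question might be serialized into a single text-to-SQL prompt. This is an illustrative assumption, not the exact format studied in the paper; the schema, table names, and helper functions are hypothetical.

```python
# Illustrative sketch of text-to-SQL prompt construction (not the paper's exact format).

def serialize_schema(tables):
    """Render a database schema as CREATE TABLE-style text, one common serialization choice."""
    return "\n".join(
        f"CREATE TABLE {table} ({', '.join(columns)});"
        for table, columns in tables.items()
    )

def build_prompt(tables, question, demonstrations=()):
    """Assemble the prompt: optional (question, SQL) demonstrations, then the target schema and question."""
    parts = []
    for demo_question, demo_sql in demonstrations:   # used in single-domain / cross-domain settings
        parts.append(f"Question: {demo_question}\nSQL: {demo_sql}\n")
    parts.append(serialize_schema(tables))           # database prompt text
    parts.append(f"Question: {question}\nSQL:")      # the model is expected to complete the SQL query
    return "\n".join(parts)

# Example usage with a hypothetical schema (zero-shot: no demonstrations).
schema = {
    "singer": ["singer_id", "name", "age"],
    "concert": ["concert_id", "singer_id", "year"],
}
print(build_prompt(schema, "How many singers are older than 30?"))
```

In this framing, the zero-shot setting uses only the schema and question, while the single-domain and cross-domain settings prepend demonstration examples drawn from the same or from different databases, respectively.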
