

Poster in Workshop: Foundation Models for Decision Making

Asking Clarifying Questions using Language Models and Probabilistic Reasoning

Top Piriyakulkij · Volodymyr Kuleshov · Kevin Ellis

presentation: Foundation Models for Decision Making
Fri 15 Dec 6:15 a.m. PST — 3:30 p.m. PST

Abstract:

The ability to ask good, informative clarifying questions is crucial for any AI system that receives natural-language input from human users and aims to be robust and reliable. Previous approaches to learning such interactive inference systems require costly data collection and generalize poorly across tasks. While recent large language models (LLMs) might seem able to solve both problems thanks to their impressive zero-shot learning ability, they turn out to be weak at asking good questions. We introduce an inference-time algorithm that helps LLMs output more informative questions. The algorithm relies on probability distributions defined by prompting an LLM and returns questions that optimize expected entropy and expected model change. Results in a simplified interactive web shopping setting with real product items show that an LLM equipped with our entropy reduction algorithm outperforms baselines built on the same underlying LLM.
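To make the entropy-based selection concrete, below is a minimal, hypothetical sketch of expected-information-gain question selection. It is not the authors' implementation: the names (select_question, answer_model) and the assumption that LLM prompting supplies both a prior over candidate items and per-question answer distributions are illustrative only.

# Hypothetical sketch: pick the clarifying question whose answer is expected
# to shrink uncertainty over candidate hypotheses (e.g., products) the most.
# The probability inputs are assumed to come from prompting an LLM.

import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy (in bits) of a {outcome: probability} dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def select_question(questions, prior, answer_model):
    """Return the question minimizing expected posterior entropy.

    questions: list of candidate clarifying questions (strings)
    prior: {hypothesis: probability} over candidate items
    answer_model: callable(question, hypothesis) -> {answer: probability},
        assumed to be backed by LLM prompting.
    """
    best_question, best_expected_entropy = None, float("inf")
    for q in questions:
        # Marginal probability of each possible answer under the prior.
        p_answer = defaultdict(float)
        for h, p_h in prior.items():
            for a, p_a_given_h in answer_model(q, h).items():
                p_answer[a] += p_h * p_a_given_h
        # Expected entropy of the posterior over hypotheses after the answer.
        expected_entropy = 0.0
        for a, p_a in p_answer.items():
            if p_a == 0:
                continue
            posterior = {
                h: prior[h] * answer_model(q, h).get(a, 0.0) / p_a
                for h in prior
            }
            expected_entropy += p_a * entropy(posterior)
        if expected_entropy < best_expected_entropy:
            best_question, best_expected_entropy = q, expected_entropy
    return best_question

In a web-shopping setting, answer_model(q, h) could be estimated by asking the LLM how a user who wants product h would answer question q; minimizing expected posterior entropy is equivalent to maximizing expected information gain about which product the user wants.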
