

Poster

Estimating the Hallucination Rate of Generative AI

Andrew Jesson · Nicolas Beltran Velez · Quentin Chu · Sweta Karlekar · Jannik Kossen · Yarin Gal · John Cunningham · David Blei

East Exhibit Hall A-C #2703
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

This work is about estimating the hallucination rate for in-context learning (ICL) with Generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and asked to make a prediction based on that dataset. The Bayesian interpretation of ICL assumes that the CGM computes a posterior predictive distribution over an unknown Bayesian model of a latent parameter and data. Under this perspective, we define a hallucination as a generated prediction that has low probability under the true latent parameter. We develop a new method that takes an ICL problem (a CGM, a dataset, and a prediction question) and estimates the probability that the CGM will generate a hallucination. The method requires only generating queries and responses from the CGM and evaluating its response log probabilities. We empirically evaluate the method on synthetic regression and natural-language ICL tasks using large language models.
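Read concretely, the abstract describes a Monte Carlo procedure: sample responses from the CGM, impute plausible future data from the model's own posterior predictive, and check how often a sampled response receives low predictive log probability under that imputed data. The sketch below shows one way such an estimator could look in Python; it is not the authors' exact algorithm, and the `model` interface (`sample`, `sample_query`, `logprob`), the threshold `log_eps`, and the sample-size parameters are all hypothetical placeholders.

```python
import math


def logsumexp(xs):
    """Numerically stable log(sum(exp(x))) for a list of log values."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))


def estimate_hallucination_rate(model, dataset, query, log_eps,
                                n_responses=50, n_imputations=10,
                                m_examples=20):
    """Monte Carlo estimate of P(hallucination) for one ICL problem.

    A sketch under the Bayesian reading of ICL: a response counts as a
    hallucination when its predictive log probability, after conditioning
    on extra examples imputed from the model's own posterior predictive,
    falls below the (hypothetical) threshold `log_eps`.
    """
    hallucinated = 0
    for _ in range(n_responses):
        # 1) Sample a candidate response to the query from the CGM.
        y = model.sample(prompt=dataset + [query])

        # 2) Impute "future" examples by autoregressively sampling new
        #    query/response pairs; these stand in for draws of the
        #    unknown latent parameter.
        logps = []
        for _ in range(n_imputations):
            imputed = list(dataset)
            for _ in range(m_examples):
                x_new = model.sample_query(prompt=imputed)
                y_new = model.sample(prompt=imputed + [x_new])
                imputed.append((x_new, y_new))
            # 3) Score the original response under the augmented context.
            logps.append(model.logprob(y, prompt=imputed + [query]))

        # 4) Average the predictive probability over imputations
        #    (in log space), then apply the low-probability test.
        log_p = logsumexp(logps) - math.log(len(logps))
        hallucinated += log_p < log_eps

    # 5) The estimated rate is the fraction of low-probability responses.
    return hallucinated / n_responses
```

A caller would supply any CGM wrapper exposing those three operations, e.g. `rate = estimate_hallucination_rate(llm, examples, question, log_eps=math.log(1e-3))`; the key design point is that everything is computed from the model's own samples and log probabilities, with no access to the true latent parameter.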
