

Poster in Workshop: Robustness in Sequence Modeling

Quantifying Uncertainty in Foundation Models via Ensembles

Meiqi Sun · Wilson Yan · Pieter Abbeel · Igor Mordatch


Abstract:

As large-scale foundation models begin to have increasing impact in real-world applications, it is important for reliability and trustworthiness that these models "know what they don't know": that they be capable of quantifying uncertainty about their own outputs. In this work, we propose disagreement among model ensembles as an effective and compute-efficient method to quantify uncertainty. We also conduct a systematic study of uncertainty quantification spanning multiple tasks - a synthetic string task and natural language arithmetic and question-answering tasks - over a progression of increasingly out-of-distribution inputs. We find that considering ensemble disagreement yields better uncertainty prediction than considering only a single model's likelihood. We hope that our investigation and results encourage more research on uncertainty quantification in foundation models and the use of model ensembles.
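To illustrate the general idea of ensemble disagreement as an uncertainty signal, below is a minimal Python sketch. It is an assumption-laden example, not the paper's implementation: the disagreement measure here (average total variation between each member's predictive distribution and the ensemble mean) and the function names are illustrative; the paper's actual metric and model outputs may differ.

import numpy as np

def single_model_confidence(probs):
    # Confidence from a single model: likelihood of its top prediction.
    return np.asarray(probs).max(axis=-1)

def ensemble_disagreement(member_probs):
    # Disagreement across ensemble members, measured as the average
    # total variation between each member and the ensemble-mean distribution.
    # (Illustrative choice; other disagreement measures are possible.)
    member_probs = np.asarray(member_probs)   # shape: (n_members, n_classes)
    mean_probs = member_probs.mean(axis=0)    # ensemble-averaged distribution
    return 0.5 * np.abs(member_probs - mean_probs).sum(axis=-1).mean()

# Toy example: members that agree vs. members that disagree on a binary output.
agree = [[0.90, 0.10], [0.85, 0.15], [0.88, 0.12]]
disagree = [[0.90, 0.10], [0.20, 0.80], [0.55, 0.45]]
print(ensemble_disagreement(agree))     # low disagreement -> low uncertainty
print(ensemble_disagreement(disagree))  # high disagreement -> high uncertainty

In this toy setting, a single member of the disagreeing ensemble can still report high confidence on its own prediction, while the ensemble-level disagreement flags the input as uncertain, which is the kind of signal the abstract attributes to ensemble disagreement over single-model likelihood.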
