

Poster
in
Workshop: Workshop on robustness of zero/few-shot learning in foundation models (R0-FoMo)

AutoMix: Mixing Models with Few-shot Self and Meta Verification

Aman Madaan · Pranjal Aggarwal · Ankit Anand · Srividya Pranavi Potharaju · Swaroop Mishra · Pei Zhou · Aditya Gupta · Dheeraj Rajagopal · Yiming Yang · Shyam Upadhyay · Mausam · Manaal Faruqui


Abstract:

Large language models (LLMs) are now available in a wide range of sizes and configurations from cloud API providers. While this diversity offers a broad spectrum of choices, effectively leveraging these options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to a larger LM based on the approximate correctness of outputs from a smaller LM. Central to AutoMix is a few-shot self-verification mechanism, which estimates the reliability of the smaller LM's outputs without requiring training. Because these verifications can be noisy, AutoMix employs a meta-verifier to refine the accuracy of the assessments. Our experiments with LLAMA2-13B and LLAMA2-70B on five context-grounded reasoning datasets demonstrate that AutoMix surpasses established baselines, improving the incremental benefit per cost by up to 57%.
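The abstract describes a three-step flow: the smaller LM drafts an answer, a few-shot self-verification step estimates its correctness, and a meta-verifier decides whether to accept the draft or escalate to the larger LM. The Python sketch below illustrates where each piece plugs in; it is not the authors' implementation. The function names, the vote-based self-verification stub, and the simple threshold meta-verifier are all illustrative assumptions (the paper's meta-verifier is more sophisticated).

    # Minimal sketch of an AutoMix-style routing loop, based only on the abstract.
    # All function names, prompts, and the threshold meta-verifier are assumptions.

    from dataclasses import dataclass
    import random


    @dataclass
    class Answer:
        text: str
        model: str  # which model produced the final answer


    def small_lm_generate(context: str, question: str) -> str:
        """Stub for the smaller LM (e.g., a 13B model); replace with a real API call."""
        return "draft answer from the small LM"


    def large_lm_generate(context: str, question: str) -> str:
        """Stub for the larger LM (e.g., a 70B model); replace with a real API call."""
        return "answer from the large LM"


    def few_shot_self_verify(context: str, question: str, answer: str, k: int = 8) -> float:
        """Stub for few-shot self-verification: prompt the small LM k times to judge
        whether `answer` is supported by `context`, and return the fraction of 'yes'
        votes. The random votes here only stand in for those LM judgments."""
        votes = [random.random() < 0.7 for _ in range(k)]
        return sum(votes) / k


    def meta_verify(verifier_score: float, threshold: float = 0.6) -> bool:
        """Illustrative meta-verifier: accept the small LM's answer only if the noisy
        self-verification score clears a threshold. This shows where a meta-verifier
        sits in the pipeline, not how the paper implements it."""
        return verifier_score >= threshold


    def automix_route(context: str, question: str) -> Answer:
        draft = small_lm_generate(context, question)
        score = few_shot_self_verify(context, question, draft)
        if meta_verify(score):
            # Verification passed: keep the cheaper model's answer.
            return Answer(draft, model="small")
        # Verification failed: escalate the query to the larger, costlier model.
        return Answer(large_lm_generate(context, question), model="large")


    if __name__ == "__main__":
        print(automix_route("some grounding document", "a question about it"))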
