

Poster

Evaluating Open-QA Evaluation

Cunxiang Wang · Sirui Cheng · Qipeng Guo · Yuanhao Yue · Bowen Ding · Zhikun Xu · Yidong Wang · Xiangkun Hu · Zheng Zhang · Yue Zhang

Great Hall & Hall B1+B2 (level 1) #1911

Abstract:

This study focuses on evaluating the Open Question Answering (Open-QA) task, which can directly estimate the factuality of large language models (LLMs). Current automatic evaluation methods have shown limitations, indicating that human evaluation remains the most reliable approach. We introduce a new task, QA Evaluation (QA-Eval), and the corresponding dataset EVOUNA, designed to assess the accuracy of AI-generated answers with respect to gold-standard answers in Open-QA. We measure the performance of existing evaluation methods against human-annotated judgments; methods that correlate highly with human evaluations are deemed more reliable. We also discuss the pitfalls of current methods and ways to improve LLM-based evaluators. We believe the QA-Eval task and the EVOUNA dataset will facilitate the development of more effective automatic evaluation tools and prove valuable for future research in this area. All resources are available at https://github.com/wangcunxiang/QA-Eval under the Apache-2.0 License.
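To make the QA-Eval setup concrete, the following minimal Python sketch scores an automatic evaluator against human annotations, as the abstract describes. The file layout and field names (question, golden_answer, model_answer, human_judge) and the simple string-containment evaluator are illustrative assumptions, not the actual EVOUNA schema or any method from the paper.

# Sketch: measure how often an automatic evaluator agrees with human judges
# on whether a model's answer matches the gold answer (field names are assumed).
import json

def naive_evaluator(model_answer: str, golden_answer: str) -> bool:
    # A deliberately simple baseline: accept if the gold answer string
    # appears in the model answer, case-insensitively.
    return golden_answer.lower().strip() in model_answer.lower()

def agreement_with_humans(path: str) -> float:
    # Fraction of examples where the automatic judgment matches the human label.
    with open(path, encoding="utf-8") as f:
        examples = json.load(f)
    hits = sum(
        naive_evaluator(ex["model_answer"], ex["golden_answer"]) == ex["human_judge"]
        for ex in examples
    )
    return hits / len(examples)

if __name__ == "__main__":
    # "evouna_sample.json" is a placeholder path for illustration.
    print(f"Agreement with human annotators: {agreement_with_humans('evouna_sample.json'):.3f}")

In the paper's framing, stronger evaluators (including LLM-based ones) would replace naive_evaluator, and their agreement or correlation with the human annotations in EVOUNA indicates their reliability.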
