Accurate and Nuanced Open-QA Evaluation Through Textual Entailment

Peiran Yao, Denilson Barbosa


Abstract
Open-domain question answering (Open-QA) is a common task for evaluating large language models (LLMs). However, current Open-QA evaluations are criticized for ambiguity in the questions and a lack of semantic understanding in the evaluators. Even complex evaluators, powered by foundation models or LLMs and trained to judge semantic equivalence, still deviate from human judgments by a large margin. We propose studying the entailment relations between answers to identify system answers that are more informative or more general than the gold answer, yielding evaluations that align far more closely with human judgments on both NaturalQuestions and TriviaQA while remaining learning-free. By quantifying the inference gap between answers, the entailment-based evaluation we propose also allows bonus or partial marks to be assigned, enabling a nuanced ranking of answer correctness with higher AUC than current methods.
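The abstract describes the core mechanism: scoring a system answer by whether it entails, or is entailed by, the gold answer. Below is a minimal sketch of that idea, assuming an off-the-shelf NLI model (roberta-large-mnli from HuggingFace) and a made-up question-templating and thresholding scheme; the paper's actual model choice and mark assignment differ, so this only illustrates the entailment-direction logic, not the authors' released implementation.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Public NLI model; an assumption for illustration, not necessarily the paper's choice.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entail_prob(premise: str, hypothesis: str) -> float:
    # Probability that `premise` entails `hypothesis` under the NLI model.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return logits.softmax(dim=-1)[0, 2].item()

def answer_score(question: str, gold: str, system: str) -> float:
    # Hypothetical scoring rule: a system answer that entails the gold answer
    # is at least as informative and earns full credit; one that is merely
    # entailed BY the gold answer is more general and earns a partial mark
    # scaled by the reverse entailment probability (the "inference gap").
    gold_stmt = f"The answer to the question '{question}' is {gold}."
    sys_stmt = f"The answer to the question '{question}' is {system}."
    if entail_prob(sys_stmt, gold_stmt) >= 0.5:    # system -> gold
        return 1.0
    return 0.5 * entail_prob(gold_stmt, sys_stmt)  # gold -> system

Usage under these assumptions: answer_score("Where was Barack Obama born?", "Honolulu, Hawaii", "Hawaii") yields a partial mark, since "Hawaii" is entailed by, but does not entail, the gold answer.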
Anthology ID:
2024.findings-acl.151
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2575–2587
URL:
https://aclanthology.org/2024.findings-acl.151
Cite (ACL):
Peiran Yao and Denilson Barbosa. 2024. Accurate and Nuanced Open-QA Evaluation Through Textual Entailment. In Findings of the Association for Computational Linguistics: ACL 2024, pages 2575–2587, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Accurate and Nuanced Open-QA Evaluation Through Textual Entailment (Yao & Barbosa, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.151.pdf