Bayesian Calibration of Win Rate Estimation with LLM Evaluators

Yicheng Gao, Gonghan Xu, Zhe Wang, Arman Cohan


Abstract
Recent advances in large language models (LLMs) show the potential of using LLMs as evaluators for assessing the quality of text generations from LLMs. However, applying LLM evaluators naively to compare different systems can lead to unreliable results due to the inaccuracy and intrinsic bias of LLM evaluators. In order to mitigate this problem, we propose two calibration methods, Bayesian Win-Rate Sampling (BWRS) and Bayesian Dawid-Skene, both of which leverage Bayesian inference to more accurately infer the true win rate of generative language models. We empirically validate our methods on six datasets covering story generation, summarization, and instruction following tasks. We show that both our methods are effective in improving the accuracy of win rate estimation using LLMs as evaluators, offering a promising direction for reliable automatic text quality evaluation.
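To make the idea of Bayesian win-rate calibration concrete, the following is a minimal illustrative sketch, not the paper's BWRS or Bayesian Dawid-Skene algorithms. It assumes a hypothetical setting where the LLM evaluator has a single known accuracy `accuracy`, places a uniform prior over the true win rate `w`, and computes the posterior mean of `w` on a grid given the evaluator's noisy pairwise judgments.

```python
import math

def posterior_mean_win_rate(observed_wins, total, accuracy=0.8, grid=1000):
    """Posterior mean of the true win rate under a noisy evaluator.

    Assumes a uniform Beta(1,1) prior over the true win rate w. An
    evaluator that is correct with probability `accuracy` reports a win
    with probability q = w*accuracy + (1-w)*(1-accuracy); the observed
    win counts then follow a Binomial(total, q) likelihood.
    """
    ws = [(i + 0.5) / grid for i in range(grid)]  # grid over w in (0, 1)
    weights = []
    for w in ws:
        q = w * accuracy + (1 - w) * (1 - accuracy)
        # Binomial log-likelihood of the observed evaluator judgments.
        ll = (observed_wins * math.log(q)
              + (total - observed_wins) * math.log(1 - q))
        weights.append(math.exp(ll))
    z = sum(weights)
    return sum(w * wt for w, wt in zip(ws, weights)) / z
```

With a perfectly accurate evaluator the estimate matches the raw observed win rate, while a noisier evaluator pushes the posterior away from the raw proportion, illustrating how Bayesian inference can correct for evaluator error. The single-accuracy noise model here is a simplifying assumption for illustration only.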
Anthology ID:
2024.emnlp-main.273
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4757–4769
URL:
https://aclanthology.org/2024.emnlp-main.273
Cite (ACL):
Yicheng Gao, Gonghan Xu, Zhe Wang, and Arman Cohan. 2024. Bayesian Calibration of Win Rate Estimation with LLM Evaluators. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4757–4769, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Bayesian Calibration of Win Rate Estimation with LLM Evaluators (Gao et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.273.pdf
Data:
 2024.emnlp-main.273.data.tgz