Language Models can Evaluate Themselves via Probability Discrepancy

Tingyu Xia, Bowen Yu, Yuan Wu, Yi Chang, Chang Zhou


Abstract
In this paper, we begin by illustrating that, when presented with a query, Large Language Models (LLMs) capable of providing accurate responses tend to exhibit a more consistent probability distribution than their less proficient counterparts. Building on this observation, we introduce a novel self-assessment criterion, termed ProbDiff, for evaluating the performance of diverse LLMs. The method eliminates the need to train an additional evaluation model or to rely on external proprietary models such as GPT-4 as a judge. Instead, it relies solely on the LLMs under evaluation to compute the probability discrepancy between the original response generation and its revised versions: for a given query, a larger discrepancy indicates relatively weaker ability. We find that ProbDiff yields results comparable to mainstream GPT-4-based evaluations in various scenarios, including NLG tasks such as translation and summarization, as well as LLM evaluation benchmarks such as AlignBench, MT-Bench, and AlpacaEval, across LLMs of different sizes.
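The abstract only sketches the mechanism, so below is a minimal illustration of how such a probability discrepancy could be computed with a Hugging Face causal LM. Everything in the sketch — the helper names, the length normalization by mean token log-probability, the sign convention, and the checkpoint in the usage note — is an assumption for illustration, not the paper's exact ProbDiff formulation (see the PDF for that).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def avg_log_prob(model, tokenizer, query, response):
    """Length-normalized log-probability of `response` given `query`
    (an assumed normalization, not necessarily the paper's)."""
    prompt_ids = tokenizer(query, return_tensors="pt").input_ids
    full_ids = tokenizer(query + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Shift so position t scores token t+1, then keep response tokens only.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    resp_lp = token_lp[:, prompt_ids.shape[1] - 1:]
    return resp_lp.mean().item()


def prob_diff(model, tokenizer, query, original, revised):
    """Gap between the model's confidence in its original response and in
    a revised version; a larger gap is read here as weaker ability."""
    return (avg_log_prob(model, tokenizer, query, original)
            - avg_log_prob(model, tokenizer, query, revised))


# Usage (hypothetical checkpoint; any causal LM under evaluation works):
# tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# lm = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# score = prob_diff(lm, tok, query, original_response, revised_response)
```

Note that the sketch scores a single revised response; how revisions are produced and aggregated is part of the paper's method and is left out here.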
Anthology ID:
2024.findings-acl.291
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4889–4901
URL:
https://aclanthology.org/2024.findings-acl.291
Cite (ACL):
Tingyu Xia, Bowen Yu, Yuan Wu, Yi Chang, and Chang Zhou. 2024. Language Models can Evaluate Themselves via Probability Discrepancy. In Findings of the Association for Computational Linguistics: ACL 2024, pages 4889–4901, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Language Models can Evaluate Themselves via Probability Discrepancy (Xia et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.291.pdf