DnA-Eval: Enhancing Large Language Model Evaluation through Decomposition and Aggregation

Minzhi Li, Zhengyuan Liu, Shumin Deng, Shafiq Joty, Nancy Chen, Min-Yen Kan


Abstract
The acceleration of research on Large Language Models (LLMs) has opened up new possibilities for evaluating generated text. Although LLMs serve as scalable and economical evaluators, how reliable these evaluators are remains under-explored. Prior efforts in the meta-evaluation of LLMs as judges prompt an LLM only once to obtain a final evaluation decision and then compute the agreement between LLMs’ outputs and human labels. This offers little interpretability into the evaluation capability of LLMs. In light of this challenge, we propose DnA-Eval, which breaks down the evaluation process into decomposition and aggregation stages based on pedagogical practices. Our experiments show that it not only provides a more interpretable window into how well LLMs evaluate, but also leads to improvements of up to 39.6% for different LLMs on a variety of meta-evaluation benchmarks.
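As a rough illustration of the decompose-then-aggregate idea the abstract describes, the sketch below scores each candidate answer on a set of per-criterion prompts and then combines the scores into a pairwise verdict. The criteria list, prompt wording, `llm` callable, and summation-based aggregation are all hypothetical placeholders for this sketch, not the paper's actual prompts or aggregation scheme.

```python
# Minimal sketch of decompose-then-aggregate LLM evaluation.
# All prompts, criteria, and the `llm` callable are illustrative
# assumptions, not the design used in the paper.
from typing import Callable, Dict

CRITERIA = ["relevance", "coherence", "factuality"]  # hypothetical criteria


def decompose_scores(llm: Callable[[str], str], question: str,
                     answer: str) -> Dict[str, int]:
    """Decomposition stage: ask the judge LLM to rate each criterion separately."""
    scores = {}
    for criterion in CRITERIA:
        prompt = (
            f"Rate the following answer on {criterion} from 1 to 5.\n"
            f"Question: {question}\nAnswer: {answer}\n"
            f"Reply with a single integer."
        )
        scores[criterion] = int(llm(prompt).strip())
    return scores


def aggregate(scores_a: Dict[str, int], scores_b: Dict[str, int]) -> str:
    """Aggregation stage: combine per-criterion scores into a pairwise verdict
    (an unweighted sum is just one simple aggregation choice)."""
    total_a, total_b = sum(scores_a.values()), sum(scores_b.values())
    if total_a == total_b:
        return "tie"
    return "A" if total_a > total_b else "B"


def dna_eval_pairwise(llm: Callable[[str], str], question: str,
                      answer_a: str, answer_b: str) -> str:
    """Compare two candidate answers via decomposed scoring plus aggregation."""
    return aggregate(decompose_scores(llm, question, answer_a),
                     decompose_scores(llm, question, answer_b))


if __name__ == "__main__":
    fake_llm = lambda prompt: "4"  # stand-in judge for testing the plumbing
    print(dna_eval_pairwise(fake_llm, "What is 2+2?", "4", "five"))  # -> "tie"
```

Because both stages are explicit, the per-criterion scores give an interpretable trace of where the judge agrees or disagrees with human labels, which is the window the single-prompt setup lacks.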
Anthology ID: 2025.coling-main.156
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 2277–2290
URL: https://aclanthology.org/2025.coling-main.156/
Cite (ACL): Minzhi Li, Zhengyuan Liu, Shumin Deng, Shafiq Joty, Nancy Chen, and Min-Yen Kan. 2025. DnA-Eval: Enhancing Large Language Model Evaluation through Decomposition and Aggregation. In Proceedings of the 31st International Conference on Computational Linguistics, pages 2277–2290, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): DnA-Eval: Enhancing Large Language Model Evaluation through Decomposition and Aggregation (Li et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.156.pdf