DEBATE: Devil’s Advocate-Based Assessment and Text Evaluation

Alex Kim, Keonwoo Kim, Sangwon Yoon


Abstract
As natural language generation (NLG) models have become prevalent, systematically assessing the quality of machine-generated text has become increasingly important. Recent studies introduce LLM-based evaluators that operate as reference-free metrics, demonstrating their ability to handle novel tasks adeptly. However, these models generally rely on a single-agent approach, which, we argue, places an inherent limit on their performance, because an LLM agent's responses exhibit biases, such as preferences for certain text structures or content. In this work, we propose DEBATE, an NLG evaluation framework built on a multi-agent scoring system augmented with the concept of a Devil's Advocate. Within the framework, one agent is instructed to criticize the other agents' arguments, potentially resolving biases in the LLM agents' answers. DEBATE substantially outperforms previous state-of-the-art methods on two meta-evaluation benchmarks for NLG evaluation, SummEval and TopicalChat. We also show that the extensiveness of the debates among agents and the persona of an agent can influence evaluator performance.
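To make the idea concrete, below is a minimal sketch of how a Devil's Advocate loop around an LLM scorer could look. The `call_llm` stub, the prompts, the agent roles, and the round count are all illustrative assumptions, not the paper's actual prompts or implementation.

```python
# Minimal sketch of a DEBATE-style multi-agent scoring loop (illustrative).
# `call_llm` is a placeholder for any chat-completion backend; the prompts,
# agent roles, round count, and score extraction below are assumptions,
# not the paper's exact setup.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; plug in your own backend here."""
    raise NotImplementedError

def debate_score(source: str, candidate: str, aspect: str, rounds: int = 2) -> float:
    # A scorer agent proposes an initial assessment with a rationale.
    assessment = call_llm(
        f"Rate the {aspect} of the candidate text on a 1-5 scale and justify it.\n"
        f"Source:\n{source}\n\nCandidate:\n{candidate}"
    )
    for _ in range(rounds):
        # The Devil's Advocate agent attacks the current assessment, aiming to
        # surface biases such as a preference for a particular text structure
        # or kind of content.
        critique = call_llm(
            "You are a Devil's Advocate. Find every weakness in the "
            f"following assessment and argue against it:\n{assessment}"
        )
        # The scorer revises its judgment in light of the critique.
        assessment = call_llm(
            f"Given this critique:\n{critique}\n\nrevise your assessment "
            f"and your 1-5 score:\n{assessment}"
        )
    # Reduce the final assessment to a numeric score.
    final = call_llm(f"Return only the final 1-5 score from:\n{assessment}")
    return float(final.strip())
```

The abstract's observation that the extensiveness of the debate and the agent's persona affect performance corresponds, in this sketch, to the `rounds` parameter and the wording of the Devil's Advocate prompt.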
Anthology ID: 2024.findings-acl.112
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1885–1897
URL: https://aclanthology.org/2024.findings-acl.112
DOI: 10.18653/v1/2024.findings-acl.112
Cite (ACL): Alex Kim, Keonwoo Kim, and Sangwon Yoon. 2024. DEBATE: Devil’s Advocate-Based Assessment and Text Evaluation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 1885–1897, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): DEBATE: Devil’s Advocate-Based Assessment and Text Evaluation (Kim et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.112.pdf