Measuring the Robustness of Reference-Free Dialogue Evaluation Systems

Justin Vasselli, Adam Nohejl, Taro Watanabe


Abstract
Advancements in dialogue systems powered by large language models (LLMs) have outpaced the development of reliable evaluation metrics, particularly for diverse and creative responses. We present a benchmark for evaluating the robustness of reference-free dialogue metrics against four categories of adversarial attacks: speaker tag prefixes, static responses, ungrammatical responses, and repeated conversational context. We analyze metrics such as DialogRPT, UniEval, and PromptEval—a prompt-based method leveraging LLMs—across grounded and ungrounded datasets. By examining both their correlation with human judgment and their susceptibility to adversarial attacks, we find that these two axes are not always aligned; metrics that appear equivalent under traditional benchmarks can differ markedly in how they score adversarial responses. These findings motivate the development of nuanced evaluation frameworks to address real-world dialogue challenges.
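The four attack categories named in the abstract can be illustrated with a minimal sketch. The function names and the exact perturbations below are illustrative assumptions for exposition, not the authors' released benchmark code.

```python
# Illustrative sketch (not the paper's implementation) of the four
# adversarial response categories: speaker tag prefixes, static responses,
# ungrammatical responses, and repeated conversational context.

def speaker_tag_prefix(response: str, tag: str = "Speaker B:") -> str:
    # Prepend a speaker tag to an otherwise valid response.
    return f"{tag} {response}"

def static_response(_context: list[str]) -> str:
    # Return the same generic reply regardless of the conversation.
    return "That's interesting, tell me more."

def ungrammatical_response(response: str) -> str:
    # A crude grammaticality perturbation: drop every third word.
    words = response.split()
    return " ".join(w for i, w in enumerate(words) if i % 3 != 2)

def repeat_context(context: list[str]) -> str:
    # Echo the previous conversational turn back as the "response".
    return context[-1]

if __name__ == "__main__":
    context = ["Hi, how was your weekend?",
               "Great, I went hiking in the mountains."]
    candidate = "That sounds fun, which trail did you take?"
    print(speaker_tag_prefix(candidate))
    print(static_response(context))
    print(ungrammatical_response(candidate))
    print(repeat_context(context))
```

A robust reference-free metric would be expected to penalize the static, ungrammatical, and repeated-context responses while not being distracted by a benign speaker tag prefix.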
Anthology ID: 2025.coling-main.331
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 4958–4972
URL: https://aclanthology.org/2025.coling-main.331/
Cite (ACL): Justin Vasselli, Adam Nohejl, and Taro Watanabe. 2025. Measuring the Robustness of Reference-Free Dialogue Evaluation Systems. In Proceedings of the 31st International Conference on Computational Linguistics, pages 4958–4972, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Measuring the Robustness of Reference-Free Dialogue Evaluation Systems (Vasselli et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.331.pdf