Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents

Eric Smith, Orion Hsu, Rebecca Qian, Stephen Roller, Y-Lan Boureau, Jason Weston


Abstract
At the heart of improving conversational AI is the open problem of how to evaluate conversations. Issues with automatic metrics are well known (Liu et al., 2016), with human evaluations still considered the gold standard. Unfortunately, how to perform human evaluations is also an open problem: differing data collection methods have varying levels of human agreement and statistical sensitivity, resulting in differing amounts of human annotation hours and labor costs. In this work we compare five different crowdworker-based human evaluation methods and find that different methods are best depending on the types of models compared, with no clear winner across the board. While this highlights the open problems in the area, our analysis leads to advice on when to use which method, and to possible future directions.
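As a rough illustration of what "statistical sensitivity" means for a pairwise human evaluation (this is only a minimal sketch under assumed data, not the paper's own analysis protocol), the snippet below bootstraps a confidence interval over hypothetical crowdworker preference votes and checks whether the interval excludes a 50% tie rate. The function name and the example vote counts are illustrative assumptions.

```python
# Minimal sketch (hypothetical data): bootstrap a confidence interval for the
# rate at which crowdworkers prefer model A over model B, and test whether
# the interval excludes 0.5 (i.e., no preference).
import random

def bootstrap_preference_test(votes, n_resamples=10_000, alpha=0.05, seed=0):
    """votes: list of 1 (model A preferred) / 0 (model B preferred)."""
    rng = random.Random(seed)
    n = len(votes)
    rates = []
    for _ in range(n_resamples):
        sample = [votes[rng.randrange(n)] for _ in range(n)]
        rates.append(sum(sample) / n)
    rates.sort()
    lo = rates[int((alpha / 2) * n_resamples)]
    hi = rates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi, not (lo <= 0.5 <= hi)

# Hypothetical example: 60 of 100 annotations prefer model A.
votes = [1] * 60 + [0] * 40
lo, hi, significant = bootstrap_preference_test(votes)
print(f"95% CI for P(A preferred): [{lo:.2f}, {hi:.2f}]; significant: {significant}")
```

A more sensitive evaluation method is one that, for the same annotation budget, more often yields an interval excluding 0.5 when a real quality difference exists.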
Anthology ID:
2022.nlp4convai-1.8
Volume:
Proceedings of the 4th Workshop on NLP for Conversational AI
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Bing Liu, Alexandros Papangelis, Stefan Ultes, Abhinav Rastogi, Yun-Nung Chen, Georgios Spithourakis, Elnaz Nouri, Weiyan Shi
Venue:
NLP4ConvAI
Publisher:
Association for Computational Linguistics
Pages:
77–97
URL:
https://aclanthology.org/2022.nlp4convai-1.8
DOI:
10.18653/v1/2022.nlp4convai-1.8
Cite (ACL):
Eric Smith, Orion Hsu, Rebecca Qian, Stephen Roller, Y-Lan Boureau, and Jason Weston. 2022. Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents. In Proceedings of the 4th Workshop on NLP for Conversational AI, pages 77–97, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents (Smith et al., NLP4ConvAI 2022)
PDF:
https://aclanthology.org/2022.nlp4convai-1.8.pdf
Video:
https://aclanthology.org/2022.nlp4convai-1.8.mp4