The Authenticity Gap in Human Evaluation

Kawin Ethayarajh, Dan Jurafsky


Abstract
Human ratings are the gold standard in NLG evaluation. The standard protocol is to collect ratings of generated text, average across annotators, and rank NLG systems by their average scores. However, little consideration has been given to whether this approach faithfully captures human preferences. Analyzing this standard protocol through the lens of utility theory in economics, we identify the implicit assumptions it makes about annotators. These assumptions are often violated in practice, in which case annotator ratings cease to reflect their preferences. The most egregious violations come from using Likert scales, which provably reverse the direction of the true preference in certain cases. We suggest improvements to the standard protocol to make it more theoretically sound, but even in its improved form, it cannot be used to evaluate open-ended tasks like story generation. For the latter, we propose a new human evaluation protocol called system-level probabilistic assessment (SPA). When human evaluation of stories is done with SPA, we can recover the ordering of GPT-3 models by size, with statistically significant results. However, when human evaluation is done with the standard protocol, fewer than half of the expected preferences can be recovered (e.g., there is no significant difference between curie and davinci, despite using a highly powered test).
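For readers unfamiliar with the protocol the abstract critiques, the following is a minimal illustrative sketch (not code from the paper; the system names and Likert ratings are hypothetical) of collecting annotator ratings, averaging them, and ranking NLG systems by their mean score:

# Illustrative sketch of the standard human-evaluation protocol described in
# the abstract: average Likert ratings across annotators, then rank systems.
# System names and ratings are hypothetical, not data from the paper.
from statistics import mean

# Hypothetical 1-5 Likert ratings; ratings[system][i] is annotator i's rating.
ratings = {
    "system_A": [4, 5, 3, 4, 4],
    "system_B": [5, 3, 5, 2, 5],
    "system_C": [3, 3, 4, 3, 3],
}

# Average across annotators for each system.
mean_scores = {system: mean(rs) for system, rs in ratings.items()}

# Rank systems by mean rating, highest first.
ranking = sorted(mean_scores, key=mean_scores.get, reverse=True)

print(mean_scores)
print("Ranking by mean Likert score:", ranking)

This ranking-by-mean-rating step is the part of the protocol the paper argues can diverge from, and with Likert scales even reverse, annotators' true preferences.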
Anthology ID: 2022.emnlp-main.406
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 6056–6070
URL: https://aclanthology.org/2022.emnlp-main.406
DOI: 10.18653/v1/2022.emnlp-main.406
Cite (ACL): Kawin Ethayarajh and Dan Jurafsky. 2022. The Authenticity Gap in Human Evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6056–6070, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): The Authenticity Gap in Human Evaluation (Ethayarajh & Jurafsky, EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.406.pdf