The use of rating and Likert scales in Natural Language Generation human evaluation tasks: A review and some recommendations

Jacopo Amidei, Paul Piwek, Alistair Willis


Abstract
Rating and Likert scales are widely used in evaluation experiments to measure the quality of Natural Language Generation (NLG) systems. We review the use of rating and Likert scales for NLG evaluation tasks published in NLG specialized conferences over the last ten years (135 papers in total). Our analysis brings to light a number of deviations from good practice in their use. We conclude with some recommendations about the use of such scales. Our aim is to encourage the appropriate use of evaluation methodologies in the NLG community.
Anthology ID:
W19-8648
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
397–402
URL:
https://aclanthology.org/W19-8648
DOI:
10.18653/v1/W19-8648
Cite (ACL):
Jacopo Amidei, Paul Piwek, and Alistair Willis. 2019. The use of rating and Likert scales in Natural Language Generation human evaluation tasks: A review and some recommendations. In Proceedings of the 12th International Conference on Natural Language Generation, pages 397–402, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
The use of rating and Likert scales in Natural Language Generation human evaluation tasks: A review and some recommendations (Amidei et al., INLG 2019)
PDF:
https://aclanthology.org/W19-8648.pdf