Human Evaluation of Creative NLG Systems: An Interdisciplinary Survey on Recent Papers

Mika Hämäläinen, Khalid Alnajjar


Abstract
We survey human evaluation in papers presenting work on creative natural language generation published at INLG 2020 and ICCC 2020. The most typical human evaluation method is a scaled survey, usually on a 5-point scale, although many less common methods exist. The most commonly evaluated parameters are meaning, syntactic correctness, novelty, relevance, and emotional value, among many others. Our guidelines for future evaluation include clearly defining the goal of the generative system, asking questions that are as concrete as possible, testing the evaluation setup, using multiple different evaluation setups, reporting the entire evaluation process and potential biases clearly, and finally analyzing the evaluation results in a more profound way than merely reporting the most typical statistics.
Anthology ID: 2021.gem-1.9
Volume: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
Month: August
Year: 2021
Address: Online
Editors: Antoine Bosselut, Esin Durmus, Varun Prashant Gangal, Sebastian Gehrmann, Yacine Jernite, Laura Perez-Beltrachini, Samira Shaikh, Wei Xu
Venue: GEM
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 84–95
URL: https://aclanthology.org/2021.gem-1.9
DOI: 10.18653/v1/2021.gem-1.9
Cite (ACL): Mika Hämäläinen and Khalid Alnajjar. 2021. Human Evaluation of Creative NLG Systems: An Interdisciplinary Survey on Recent Papers. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 84–95, Online. Association for Computational Linguistics.
Cite (Informal): Human Evaluation of Creative NLG Systems: An Interdisciplinary Survey on Recent Papers (Hämäläinen & Alnajjar, GEM 2021)
PDF: https://aclanthology.org/2021.gem-1.9.pdf