ReproGen: Proposal for a Shared Task on Reproducibility of Human Evaluations in NLG

Anya Belz, Shubham Agarwal, Anastasia Shimorina, Ehud Reiter


Abstract
Across NLP, a growing body of work is looking at the issue of reproducibility. However, the replicability of human evaluation experiments and the reproducibility of their results are currently under-addressed, which is of particular concern for NLG, where human evaluations are the norm. This paper outlines our ideas for a shared task on reproducibility of human evaluations in NLG which aims (i) to shed light on the extent to which past NLG evaluations are replicable and reproducible, and (ii) to draw conclusions regarding how evaluations can be designed and reported to increase replicability and reproducibility. If the task is run over several years, we hope to be able to document an overall increase in levels of replicability and reproducibility over time.
Anthology ID: 2020.inlg-1.29
Volume: Proceedings of the 13th International Conference on Natural Language Generation
Month: December
Year: 2020
Address: Dublin, Ireland
Editors: Brian Davis, Yvette Graham, John Kelleher, Yaji Sripada
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 232–236
URL: https://aclanthology.org/2020.inlg-1.29
DOI: 10.18653/v1/2020.inlg-1.29
Cite (ACL): Anya Belz, Shubham Agarwal, Anastasia Shimorina, and Ehud Reiter. 2020. ReproGen: Proposal for a Shared Task on Reproducibility of Human Evaluations in NLG. In Proceedings of the 13th International Conference on Natural Language Generation, pages 232–236, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): ReproGen: Proposal for a Shared Task on Reproducibility of Human Evaluations in NLG (Belz et al., INLG 2020)
PDF: https://aclanthology.org/2020.inlg-1.29.pdf