Reproducibility of Exploring Neural Text Simplification Models: A Review

Mohammad Arvan, Luís Pina, Natalie Parde
Abstract
The reproducibility of NLP research has drawn increased attention in recent years. Several tools, guidelines, and metrics have been introduced to address concerns regarding this problem; however, much work remains to ensure widespread adoption of effective reproducibility standards. In this work, we review the reproducibility of Exploring Neural Text Simplification Models by Nisioi et al. (2017), evaluating it from three main aspects: data, software artifacts, and automatic evaluations. We discuss the challenges and issues we faced during this process and explore the adequacy of current reproducibility standards. Our code, trained models, and a Docker container of the environment used for training and evaluation are publicly available.
Anthology ID: 2022.inlg-genchal.10
Volume: Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
Month: July
Year: 2022
Address: Waterville, Maine, USA and virtual meeting
Editors: Samira Shaikh, Thiago Ferreira, Amanda Stent
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 62–70
URL: https://aclanthology.org/2022.inlg-genchal.10
Cite (ACL):
Mohammad Arvan, Luís Pina, and Natalie Parde. 2022. Reproducibility of Exploring Neural Text Simplification Models: A Review. In Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges, pages 62–70, Waterville, Maine, USA and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Reproducibility of Exploring Neural Text Simplification Models: A Review (Arvan et al., INLG 2022)
PDF: https://aclanthology.org/2022.inlg-genchal.10.pdf