What can Neural Referential Form Selectors Learn?

Guanyi Chen, Fahime Same, Kees van Deemter


Abstract
Despite achieving encouraging results, neural Referring Expression Generation models are often thought to lack transparency. We probed neural Referential Form Selection (RFS) models to find out to what extent state-of-the-art RFS models learn and capture the linguistic features that influence the choice of referring expression (RE) form. The results of 8 probing tasks show that all the defined features were learned to some extent. The probing tasks pertaining to referential status and syntactic position exhibited the highest performance; the lowest performance was achieved by the probing models designed to predict discourse structure properties beyond the sentence level.
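The probing setup described in the abstract can be illustrated with a rough sketch: a simple diagnostic classifier is trained on frozen hidden representations taken from an RFS model and asked to predict one linguistic feature (e.g. referential status). The snippet below is a generic, hypothetical illustration in Python, not the authors' implementation; the probe helper, the data arrays, and the choice of logistic regression are assumptions.

# A minimal, generic probing sketch (illustrative only; not the authors' code).
# Assumption: for each referent mention we already have a fixed-size hidden
# representation extracted from a trained RFS model, plus a gold label for one
# linguistic feature (e.g. referential status: 'new' vs. 'old').
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def probe(train_reprs, train_labels, test_reprs, test_labels):
    """Train a simple diagnostic classifier on frozen representations and
    report how well the target feature can be read off them."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_reprs, train_labels)
    return accuracy_score(test_labels, clf.predict(test_reprs))

# Hypothetical usage: X_* are (n_mentions, hidden_dim) arrays taken from the
# RFS model's encoder, y_* the corresponding feature labels.
# score = probe(X_train, y_train, X_test, y_test)
# A high probing score suggests the representations encode the probed feature.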
Anthology ID: 2021.inlg-1.15
Volume: Proceedings of the 14th International Conference on Natural Language Generation
Month: August
Year: 2021
Address: Aberdeen, Scotland, UK
Editors: Anya Belz, Angela Fan, Ehud Reiter, Yaji Sripada
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 154–166
URL: https://aclanthology.org/2021.inlg-1.15
DOI: 10.18653/v1/2021.inlg-1.15
Cite (ACL): Guanyi Chen, Fahime Same, and Kees van Deemter. 2021. What can Neural Referential Form Selectors Learn?. In Proceedings of the 14th International Conference on Natural Language Generation, pages 154–166, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Cite (Informal): What can Neural Referential Form Selectors Learn? (Chen et al., INLG 2021)
PDF: https://aclanthology.org/2021.inlg-1.15.pdf
Supplementary attachment: 2021.inlg-1.15.Supplementary_Attachment.zip
Data: WebNLG