Self-Training for Compositional Neural NLG in Task-Oriented Dialogue

Xintong Li, Symon Stevens-Guille, Aleksandre Maskharashvili, Michael White


Abstract
Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs. To address this issue, we show that self-training enhanced with constrained decoding yields large gains in data efficiency on a conversational weather dataset that employs compositional meaning representations. In particular, our experiments indicate that self-training with constrained decoding can enable sequence-to-sequence models to achieve satisfactory quality using vanilla decoding with five to ten times less data than an ordinary supervised baseline; moreover, by leveraging pretrained models, data efficiency can be increased further, to fifty times less data. We confirm the main automatic results with human evaluations and show that they extend to an enhanced, compositional version of the E2E dataset. The end result is an approach that makes it possible to achieve acceptable performance on compositional NLG tasks using hundreds rather than tens of thousands of training samples.
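The following is a minimal sketch of a self-training loop with constrained decoding as described in the abstract, assuming a generic sequence-to-sequence model interface; the helper names (constrained_decode, model.fit, model.generate) are hypothetical and do not reflect the authors' released implementation.

# Hypothetical sketch of self-training with constrained decoding.
# Names such as `model.fit` and `constrained_decode` are illustrative only
# and are not taken from the paper or its released code (znculee/treenlg-bart).

def constrained_decode(model, mr):
    # Placeholder: a real implementation would prune decoder hypotheses whose
    # tree structure is inconsistent with the compositional input MR.
    return model.generate(mr)

def self_train(model, labeled, unlabeled_mrs, rounds=3):
    # labeled: list of (mr, text) pairs; unlabeled_mrs: list of MRs only.
    train_set = list(labeled)
    for _ in range(rounds):
        model.fit(train_set)  # supervised fine-tuning on current data
        pseudo = [(mr, constrained_decode(model, mr)) for mr in unlabeled_mrs]
        train_set = list(labeled) + pseudo  # retrain on gold plus pseudo-labels
    return model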
Anthology ID:
2021.inlg-1.10
Volume:
Proceedings of the 14th International Conference on Natural Language Generation
Month:
August
Year:
2021
Address:
Aberdeen, Scotland, UK
Editors:
Anya Belz, Angela Fan, Ehud Reiter, Yaji Sripada
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
87–102
URL:
https://aclanthology.org/2021.inlg-1.10
DOI:
10.18653/v1/2021.inlg-1.10
Cite (ACL):
Xintong Li, Symon Stevens-Guille, Aleksandre Maskharashvili, and Michael White. 2021. Self-Training for Compositional Neural NLG in Task-Oriented Dialogue. In Proceedings of the 14th International Conference on Natural Language Generation, pages 87–102, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Cite (Informal):
Self-Training for Compositional Neural NLG in Task-Oriented Dialogue (Li et al., INLG 2021)
PDF:
https://aclanthology.org/2021.inlg-1.10.pdf
Code:
znculee/treenlg-bart