Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data

Ankit Arun, Soumya Batra, Vikas Bhardwaj, Ashwini Challa, Pinar Donmez, Peyman Heidari, Hakan Inan, Shashank Jain, Anuj Kumar, Shawn Mei, Karthik Mohan, Michael White


Abstract
Natural language generation (NLG) is a critical component in conversational systems, owing to its role in formulating correct and natural text responses. Traditionally, NLG components have been deployed using template-based solutions. Although neural network solutions recently developed in the research community have been shown to provide several benefits, deployment of such model-based solutions has been challenging due to high latency, correctness issues, and high data needs. In this paper, we present approaches that have helped us deploy data-efficient neural solutions for NLG in conversational systems to production. We describe a family of sampling and modeling techniques that attain production quality with lightweight neural network models using only a fraction of the data that would otherwise be necessary, and provide a thorough comparison among them. Our results show that domain complexity dictates the appropriate approach for achieving high data efficiency. Finally, we distill the lessons from our experimental findings into a list of best practices for production-level NLG model development, and present them in a brief runbook. Importantly, the end products of all of these techniques are small sequence-to-sequence models (~2 MB) that we can reliably deploy in production. These models achieve the same quality as large pretrained models (~1 GB) as judged by human raters.
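One way to close the gap the abstract describes, matching a ~1 GB pretrained model with a ~2 MB deployable one, is sequence-level knowledge distillation. The sketch below is illustrative rather than the authors' exact method: a large pretrained seq2seq teacher pseudo-labels unlabeled E2E-style meaning representations (MRs), and the resulting pairs become training data for a small student. The checkpoint name, the toy MRs, and the decoding settings are all assumptions for the example.

```python
# A minimal sketch of sequence-level knowledge distillation for NLG.
# Assumptions (not from the paper itself): the teacher checkpoint,
# the toy meaning representations (MRs), and the decoding settings.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# In practice the teacher would be a large model already fine-tuned on
# in-domain (MR, text) pairs; base BART serves as a stand-in here.
teacher_name = "facebook/bart-large"
tokenizer = BartTokenizer.from_pretrained(teacher_name)
teacher = BartForConditionalGeneration.from_pretrained(teacher_name).eval()

# Flat E2E-style MRs; a real system would draw these from its dialog plans.
unlabeled_mrs = [
    "name[The Mill] food[Italian] priceRange[cheap] area[riverside]",
    "name[Aromi] eatType[coffee shop] customerRating[5 out of 5]",
]

distilled_pairs = []
with torch.no_grad():
    for mr in unlabeled_mrs:
        inputs = tokenizer(mr, return_tensors="pt")
        # Beam search tends to give safer, more faithful pseudo-labels
        # than sampling, which matters for production correctness.
        output_ids = teacher.generate(**inputs, num_beams=5, max_length=64)
        text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        distilled_pairs.append((mr, text))

# `distilled_pairs` can now train a small seq2seq student, e.g. a
# one-layer LSTM encoder-decoder optimized with cross-entropy.
```

The data efficiency comes from needing only a modest set of human-labeled pairs to fine-tune the teacher; the teacher then manufactures as much student training data as desired, and only the small student ships to production.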
Anthology ID:
2020.coling-industry.7
Volume:
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track
Month:
December
Year:
2020
Address:
Online
Editors:
Ann Clifton, Courtney Napoles
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
64–77
URL:
https://aclanthology.org/2020.coling-industry.7
DOI:
10.18653/v1/2020.coling-industry.7
Cite (ACL):
Ankit Arun, Soumya Batra, Vikas Bhardwaj, Ashwini Challa, Pinar Donmez, Peyman Heidari, Hakan Inan, Shashank Jain, Anuj Kumar, Shawn Mei, Karthik Mohan, and Michael White. 2020. Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 64–77, Online. International Committee on Computational Linguistics.
Cite (Informal):
Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data (Arun et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-industry.7.pdf
Data:
E2E