Building Adaptive Acceptability Classifiers for Neural NLG
Soumya Batra | Shashank Jain | Peyman Heidari | Ankit Arun | Catharine Youngs | Xintong Li | Pinar Donmez | Shawn Mei | Shiunzu Kuo | Vikas Bhardwaj | Anuj Kumar | Michael White
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches. An NLG response is considered acceptable if it is both semantically correct and grammatical. We do not make use of any human references, making the classifiers suitable for runtime deployment. Training data for the classifiers is obtained using a 2-stage approach: we first generate synthetic data using a combination of existing and new model-based approaches, then apply a novel validation framework to filter and sort the synthetic data into acceptable and unacceptable classes. Our 2-stage approach adapts to a wide range of data representations and does not require additional data beyond what the NLG models are trained on. It is also independent of the underlying NLG model architecture, and is able to generate more realistic samples close to the distribution of the NLG model-generated responses. We present results on 5 datasets (WebNLG, Cleaned E2E, ViGGO, Alarm, and Weather) with varying data representations. We compare our framework with existing techniques that involve synthetic data generation using simple sentence transformations and/or model-based techniques, and show that building acceptability classifiers using data that resembles the generation model outputs, followed by a validation framework, outperforms the existing techniques, achieving state-of-the-art results. We also show that our techniques can be used in few-shot settings using self-training.
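To make the 2-stage recipe described in the abstract concrete, below is a minimal, hypothetical Python sketch of the pipeline: synthetic candidate responses are produced for each training example, a validation step sorts them into acceptable and unacceptable classes, and the resulting labeled pairs can then be used to train a binary acceptability classifier. All function names, the toy word-dropping perturbation, and the string-match validation are illustrative assumptions only; the paper instead uses model-based candidate generation and a dedicated validation framework.

```python
# Hypothetical sketch of the 2-stage training-data pipeline described in the
# abstract; it is NOT the authors' implementation.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Example:
    meaning_representation: str  # structured input (e.g. slot-value pairs or triples)
    response: str                # candidate NLG output
    label: int                   # 1 = acceptable, 0 = unacceptable


def generate_synthetic_candidates(mr: str, reference: str) -> List[str]:
    """Stage 1 (illustrative): produce candidate responses.
    The paper samples from the NLG model and applies model-based approaches;
    here we keep the reference and add a corrupted copy as a stand-in."""
    corrupted = reference.rsplit(" ", 1)[0]  # drop the last word -> likely unacceptable
    return [reference, corrupted]


def validate(mr: str, candidate: str, reference: str) -> int:
    """Stage 2 (illustrative): sort candidates into acceptable (1) vs.
    unacceptable (0). The paper uses a validation framework for this;
    a toy exact-match check stands in here."""
    return int(candidate == reference)


def build_training_data(corpus: List[Tuple[str, str]]) -> List[Example]:
    """Run both stages over (meaning representation, reference) pairs to build
    labeled data for the acceptability classifier."""
    data: List[Example] = []
    for mr, reference in corpus:
        for cand in generate_synthetic_candidates(mr, reference):
            data.append(Example(mr, cand, validate(mr, cand, reference)))
    return data


if __name__ == "__main__":
    toy_corpus = [("name[Aromi], food[Italian]", "Aromi serves Italian food.")]
    for ex in build_training_data(toy_corpus):
        print(ex.label, "|", ex.response)
```

The labeled examples produced this way would then feed a standard binary classifier (the specific classifier architecture is not constrained by this sketch), which is what allows the approach to remain independent of the underlying NLG model.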