Model Criticism for Long-Form Text Generation

Yuntian Deng, Volodymyr Kuleshov, Alexander Rush


Abstract
Language models have demonstrated the ability to generate highly fluent text; however, it remains unclear whether their output retains coherent high-level structure (e.g., story progression). Here, we propose to apply a statistical tool, model criticism in latent space, to evaluate the high-level structure of generated text. Model criticism compares the distributions of real and generated data in a latent space obtained according to a posited generative process. Different generative processes identify specific failure modes of the underlying model. We perform experiments on three representative aspects of high-level discourse—coherence, coreference, and topicality—and find that transformer-based language models are able to capture topical structure but have a harder time maintaining structural coherence or modeling coreference.
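To make the approach concrete, the sketch below illustrates one simple form of model criticism in latent space. It is an illustration under stated assumptions, not the paper's implementation: real_latents and generated_latents are placeholder document-level latent representations from any fixed encoder, and the posited generative process is simplified to a single Gaussian, whereas the paper studies processes targeting coherence, coreference, and topicality.

    # Hypothetical sketch: criticize a language model by comparing real and
    # generated data in a latent space (all names and choices are illustrative).
    import numpy as np
    from scipy.stats import multivariate_normal, ks_2samp

    def criticize(real_latents, generated_latents):
        # Posited generative process: a single Gaussian fit to real-data latents.
        mean = real_latents.mean(axis=0)
        cov = np.cov(real_latents, rowvar=False) + 1e-6 * np.eye(real_latents.shape[1])
        prior = multivariate_normal(mean=mean, cov=cov)

        # Score each document's latent under the posited process.
        real_scores = prior.logpdf(real_latents)
        gen_scores = prior.logpdf(generated_latents)

        # Two-sample test: if generated latents are distributed like real ones,
        # the test should not reject; a small p-value flags a high-level mismatch.
        statistic, p_value = ks_2samp(real_scores, gen_scores)
        return statistic, p_value

In this framing, the choice of latent space and posited process determines which failure mode the test is sensitive to; for example, replacing the single Gaussian with a topic-level process would probe topicality rather than generic distributional fit.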
Anthology ID: 2022.emnlp-main.815
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 11887–11912
URL: https://aclanthology.org/2022.emnlp-main.815
DOI: 10.18653/v1/2022.emnlp-main.815
Cite (ACL): Yuntian Deng, Volodymyr Kuleshov, and Alexander Rush. 2022. Model Criticism for Long-Form Text Generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11887–11912, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Model Criticism for Long-Form Text Generation (Deng et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.815.pdf