Correcting Texts Generated by Transformers using Discourse Features and Web Mining

Alexander Chernyavskiy, Dmitry Ilvovsky, Boris Galitsky


Abstract
Recent transformer-based approaches to NLG, such as GPT-2, can generate syntactically coherent original texts. However, the generated texts suffer from two serious flaws: global discourse incoherence and factually wrong entity values. We address both of these flaws; the fixes are independent but can be combined to generate original texts that are both coherent and truthful. This paper presents an approach to estimating the quality of discourse structure. Empirical results confirm that the discourse structure of currently generated texts is inaccurate. We propose research directions for correcting it by incorporating discourse features into the fine-tuning procedure. The suggested approach is universal and can be applied to different languages. In addition, we propose a method to correct wrong entity values based on Web Mining and text alignment.
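The abstract's second contribution, correcting wrong entity values via Web Mining and text alignment, can be illustrated with a minimal sketch. The paper's actual pipeline is not specified here; the function below is a hypothetical toy that masks numeric values, aligns a generated sentence against web-mined candidate sentences with a string-similarity ratio, and substitutes the values from the best-aligned source. The function name, threshold, and number-only scope are illustrative assumptions, not the authors' method.

```python
import re
from difflib import SequenceMatcher

def align_and_correct(generated, mined_candidates, threshold=0.6):
    """Toy illustration: replace numeric entity values in `generated`
    with values from the best-aligned web-mined sentence.
    (Hypothetical sketch; not the paper's actual algorithm.)"""
    # Mask numbers so alignment compares sentence templates, not values.
    def mask(s):
        return re.sub(r"\d+(?:\.\d+)?", "<NUM>", s)

    best, best_score = None, 0.0
    for cand in mined_candidates:
        score = SequenceMatcher(None, mask(generated), mask(cand)).ratio()
        if score > best_score:
            best, best_score = cand, score

    if best is None or best_score < threshold:
        return generated  # no trustworthy alignment found; keep as-is

    # Substitute the generated numbers with those from the aligned source.
    true_values = iter(re.findall(r"\d+(?:\.\d+)?", best))
    return re.sub(r"\d+(?:\.\d+)?",
                  lambda m: next(true_values, m.group(0)),
                  generated)
```

For example, a generated claim with a wrong height would be corrected by a well-aligned mined sentence, while an unrelated candidate ("Paris has 20 arrondissements.") scores too low to interfere.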
Anthology ID:
2021.ranlp-srw.6
Volume:
Proceedings of the Student Research Workshop Associated with RANLP 2021
Month:
September
Year:
2021
Address:
Online
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
36–43
URL:
https://aclanthology.org/2021.ranlp-srw.6
Cite (ACL):
Alexander Chernyavskiy, Dmitry Ilvovsky, and Boris Galitsky. 2021. Correcting Texts Generated by Transformers using Discourse Features and Web Mining. In Proceedings of the Student Research Workshop Associated with RANLP 2021, pages 36–43, Online. INCOMA Ltd.
Cite (Informal):
Correcting Texts Generated by Transformers using Discourse Features and Web Mining (Chernyavskiy et al., RANLP 2021)
PDF:
https://aclanthology.org/2021.ranlp-srw.6.pdf