Coherent Long Text Generation by Contrastive Soft Prompt

Guandan Chen, Jiashu Pu, Yadong Xi, Rongsheng Zhang


Abstract
Improving the coherence of long text generation is an important but challenging task. Existing models still struggle to generate a logical and coherent sentence sequence: it is difficult for a model to plan long text generation and to avoid incoherent text from a high-level semantic perspective. We speculate that this is due to two factors: (1) current training methods rely mainly on maximum likelihood estimation computed from token-level probability prediction; (2) the role of incoherent texts has been largely under-explored, so generated texts containing errors are out-of-distribution for the model. To address these issues, we propose a Contrastive Soft Prompt (CSP) model for improving the coherence of long text generation. It learns text representations in the hidden space to better plan long text generation. To this end, it jointly learns to produce a text representation that is close to the representations of coherent texts and far from those of incoherent ones, and then generates the long text using this representation as a soft prompt. We conduct experiments on two public story generation datasets, and the results show that our method generates more coherent stories than the state-of-the-art model.
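The contrastive objective sketched in the abstract (pull the soft-prompt representation toward coherent texts, push it away from incoherent ones) can be illustrated with an InfoNCE-style loss. The sketch below is a minimal, self-contained assumption of how such an objective might look; the function and variable names are illustrative and do not come from the paper's implementation.

```python
import numpy as np

def info_nce_loss(prompt, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative): treat the coherent
    text's representation as the positive class and incoherent texts'
    representations as negatives, then apply cross-entropy over
    temperature-scaled cosine similarities."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos_logit = cos(prompt, positive) / temperature
    neg_logits = np.array([cos(prompt, n) / temperature for n in negatives])
    logits = np.concatenate([[pos_logit], neg_logits])
    # Cross-entropy with the positive at index 0:
    # loss = -log( exp(pos) / sum(exp(all)) )
    return -(pos_logit - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(0)
d = 16
positive = rng.normal(size=d)                          # a coherent text's representation
negatives = [rng.normal(size=d) for _ in range(4)]     # incoherent samples

prompt_close = positive + 0.05 * rng.normal(size=d)    # prompt aligned with coherent text
prompt_far = rng.normal(size=d)                        # unaligned prompt

loss_close = info_nce_loss(prompt_close, positive, negatives)
loss_far = info_nce_loss(prompt_far, positive, negatives)
print(loss_close, loss_far)  # aligned prompt should receive the lower loss
```

In the paper's setting, the learned representation would additionally serve as a soft prompt conditioning the decoder; the snippet only shows the contrastive pull/push dynamic.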
Anthology ID:
2022.gem-1.42
Volume:
Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Antoine Bosselut, Khyathi Chandu, Kaustubh Dhole, Varun Gangal, Sebastian Gehrmann, Yacine Jernite, Jekaterina Novikova, Laura Perez-Beltrachini
Venue:
GEM
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
445–455
URL:
https://aclanthology.org/2022.gem-1.42
DOI:
10.18653/v1/2022.gem-1.42
Cite (ACL):
Guandan Chen, Jiashu Pu, Yadong Xi, and Rongsheng Zhang. 2022. Coherent Long Text Generation by Contrastive Soft Prompt. In Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 445–455, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Coherent Long Text Generation by Contrastive Soft Prompt (Chen et al., GEM 2022)
PDF:
https://aclanthology.org/2022.gem-1.42.pdf
Video:
https://aclanthology.org/2022.gem-1.42.mp4