%0 Conference Proceedings
%T Assessing Discourse Relations in Language Generation from GPT-2
%A Ko, Wei-Jen
%A Li, Junyi Jessy
%Y Davis, Brian
%Y Graham, Yvette
%Y Kelleher, John
%Y Sripada, Yaji
%S Proceedings of the 13th International Conference on Natural Language Generation
%D 2020
%8 December
%I Association for Computational Linguistics
%C Dublin, Ireland
%F ko-li-2020-assessing
%X Recent advances in NLP have been attributed to the emergence of large-scale pre-trained language models. GPT-2, in particular, is suited for generation tasks given its left-to-right language modeling objective, yet the linguistic quality of its generated text has largely remained unexplored. Our work takes a step in understanding GPT-2’s outputs in terms of discourse coherence. We perform a comprehensive study on the validity of explicit discourse relations in GPT-2’s outputs under both organic generation and fine-tuned scenarios. Results show GPT-2 does not always generate text containing valid discourse relations; nevertheless, its text is more aligned with human expectation in the fine-tuned scenario. We propose a decoupled strategy to mitigate these problems and highlight the importance of explicitly modeling discourse information.
%R 10.18653/v1/2020.inlg-1.8
%U https://aclanthology.org/2020.inlg-1.8
%U https://doi.org/10.18653/v1/2020.inlg-1.8
%P 52-59