Exploring Story Generation with Multi-task Objectives in Variational Autoencoders

Zhuohan Xie, Jey Han Lau, Trevor Cohn


Abstract
GPT-2 has frequently been adopted in story generation models for its powerful generative capability. However, it still fails to generate consistent stories and lacks diversity. Current story generation models incorporate additional information, such as plots or commonsense knowledge, into GPT-2 to guide the generation process. These approaches focus on improving the generation quality of stories, whereas our work looks at both quality and diversity. We explore combining BERT and GPT-2 to build a variational autoencoder (VAE), and extend it by adding additional objectives to learn global features such as story topic and discourse relations. Our evaluations show that our enhanced VAE provides a better quality–diversity trade-off, generates less repetitive story content, and learns a more informative latent variable.
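The multi-task objective described above can be sketched as a weighted combination of the standard VAE loss (reconstruction plus KL) and auxiliary losses for the global features. This is a minimal illustrative sketch, not the paper's implementation: the loss weights, function names, and the diagonal-Gaussian prior are all assumptions.

```python
import math

def kl_gaussian(mu, logvar):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian posterior.
    mu and logvar are per-dimension lists of floats."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, logvar))

def multitask_vae_loss(recon_nll, mu, logvar, topic_nll, discourse_nll,
                       beta=1.0, w_topic=0.5, w_disc=0.5):
    """Combine the VAE ELBO terms with auxiliary topic and discourse
    objectives. The weights (beta, w_topic, w_disc) are hypothetical
    hyperparameters, not values from the paper."""
    return (recon_nll
            + beta * kl_gaussian(mu, logvar)
            + w_topic * topic_nll
            + w_disc * discourse_nll)

# Example: with a posterior equal to the prior, the KL term vanishes
# and the loss is just the weighted sum of the three NLL terms.
loss = multitask_vae_loss(recon_nll=2.0, mu=[0.0, 0.0], logvar=[0.0, 0.0],
                          topic_nll=1.0, discourse_nll=1.0)
```

The auxiliary losses push the latent variable to encode global story properties (topic, discourse structure) rather than leaving it uninformative, which is one common remedy for posterior collapse in text VAEs.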
Anthology ID:
2021.alta-1.10
Volume:
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association
Month:
December
Year:
2021
Address:
Online
Editors:
Afshin Rahimi, William Lane, Guido Zuccon
Venue:
ALTA
Publisher:
Australasian Language Technology Association
Pages:
97–106
URL:
https://aclanthology.org/2021.alta-1.10
Cite (ACL):
Zhuohan Xie, Jey Han Lau, and Trevor Cohn. 2021. Exploring Story Generation with Multi-task Objectives in Variational Autoencoders. In Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association, pages 97–106, Online. Australasian Language Technology Association.
Cite (Informal):
Exploring Story Generation with Multi-task Objectives in Variational Autoencoders (Xie et al., ALTA 2021)
PDF:
https://aclanthology.org/2021.alta-1.10.pdf
Data
WritingPrompts