Quality Signals in Generated Stories

Manasvi Sagarkar, John Wieting, Lifu Tu, Kevin Gimpel


Abstract
We study the problem of measuring the quality of automatically-generated stories. We focus on the setting in which a few sentences of a story are provided and the task is to generate the next sentence (“continuation”) of the story. We seek to identify what makes a story continuation interesting, relevant, and high in overall quality. We crowdsource annotations along these three criteria for the outputs of story continuation systems, design features, and train models to predict the annotations. Our trained scorer can serve as a rich feature function for story generation, as a reward function for systems that use reinforcement learning to learn to generate stories, and as a partial evaluation metric for story generation.
Anthology ID:
S18-2024
Volume:
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Malvina Nissim, Jonathan Berant, Alessandro Lenci
Venue:
*SEM
SIGs:
SIGLEX | SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
192–202
URL:
https://aclanthology.org/S18-2024
DOI:
10.18653/v1/S18-2024
Cite (ACL):
Manasvi Sagarkar, John Wieting, Lifu Tu, and Kevin Gimpel. 2018. Quality Signals in Generated Stories. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 192–202, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Quality Signals in Generated Stories (Sagarkar et al., *SEM 2018)
PDF:
https://aclanthology.org/S18-2024.pdf