Are Large Language Models Capable of Generating Human-Level Narratives?
Yufei Tian | Tenghao Huang | Miri Liu | Derek Jiang | Alexander Spangher | Muhao Chen | Jonathan May | Nanyun Peng
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
As daily reliance on large language models (LLMs) grows, assessing their generation quality is crucial to understanding how they might impact our communication. This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression. We introduce a novel computational framework to analyze narratives through three discourse-level aspects: i) story arcs, ii) turning points, and iii) affective dimensions, including arousal and valence. By leveraging expert and automatic annotations, we uncover significant discrepancies between LLM- and human-written stories. While human-written stories are suspenseful, arousing, and diverse in narrative structure, LLM stories are homogeneously positive and lack tension. Next, we measure narrative reasoning skills as a precursor to generative capacity, concluding that most LLMs fall short of human abilities in discourse understanding. Finally, we show that explicit integration of the aforementioned discourse features can enhance storytelling, as demonstrated by over 40% improvement in neural storytelling in terms of diversity, suspense, and arousal. Such advances promise to facilitate greater and more natural roles for LLMs in human communication.