NarrativeXL: a Large-scale Dataset for Long-Term Memory Models

Arsenii Moskvichev, Ky-Vinh Mai


Abstract
We propose a new large-scale (nearly a million questions) ultra-long-context (more than 50,000 words average document length) reading comprehension dataset. Using GPT-3.5, we summarized each scene in 1,500 hand-curated fiction books from Project Gutenberg, yielding approximately 150 scene-level summaries per book. We then created reading comprehension questions based on these summaries, including three types of multiple-choice scene recognition questions as well as free-form narrative reconstruction questions. With 990,595 total questions, our dataset is an order of magnitude larger than the closest alternatives. Crucially, most questions have a known “retention demand”, indicating how long a memory span is needed to answer them, which should aid long-term memory performance evaluation. We validate our data in four small-scale experiments: one with human labelers, and three with existing language models. We show that our questions (1) adequately represent the source material, (2) can be used to diagnose a model’s memory capacity, and (3) are not trivial for modern language models even when the memory demand does not exceed those models’ context lengths. Lastly, we provide our code, which can be used to further expand the dataset with minimal human labor.
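The abstract's "retention demand" can be made concrete with a small sketch: if a question about scene i is posed once the reader has reached scene j, the memory span needed to answer it is roughly the length of the intervening text. The function and the toy scene texts below are hypothetical illustrations, not part of the actual NarrativeXL pipeline.

```python
# Hypothetical sketch of the "retention demand" idea: the number of
# words between the queried scene and the current reading position.
# The scene texts are stand-ins, not data from NarrativeXL itself.

def retention_demand(scene_texts, queried_scene, current_scene):
    """Count the words separating the queried scene from the current one."""
    if not 0 <= queried_scene <= current_scene < len(scene_texts):
        raise ValueError("scene indices out of range")
    intervening = scene_texts[queried_scene + 1 : current_scene + 1]
    return sum(len(text.split()) for text in intervening)

scenes = [
    "Alice finds a key in the garden.",
    "She unlocks the old door and steps inside.",
    "Inside, she meets a talking cat.",
]
# A question about scene 0, asked after reading scene 2, requires
# remembering across all the words in scenes 1 and 2:
print(retention_demand(scenes, 0, 2))
```

Under this framing, questions can be sorted by retention demand, so a model's accuracy as a function of that demand traces out its effective memory capacity.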
Anthology ID:
2023.findings-emnlp.1005
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15058–15072
URL:
https://aclanthology.org/2023.findings-emnlp.1005
DOI:
10.18653/v1/2023.findings-emnlp.1005
Cite (ACL):
Arsenii Moskvichev and Ky-Vinh Mai. 2023. NarrativeXL: a Large-scale Dataset for Long-Term Memory Models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15058–15072, Singapore. Association for Computational Linguistics.
Cite (Informal):
NarrativeXL: a Large-scale Dataset for Long-Term Memory Models (Moskvichev & Mai, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.1005.pdf