Too Long, Didn’t Model: Decomposing LLM Long Context Understanding With Novels

Sil Hamilton, Rebecca Hicke, Mia Ferrante, Matthew Wilkens, David Mimno


Abstract
Although the context length of large language models (LLMs) has increased to millions of tokens, evaluating their effectiveness beyond needle-in-a-haystack approaches has proven difficult. We argue that novels provide a case study of subtle, complicated structure and long-range semantic dependencies often over 128k tokens in length. Existing novel-based long-context benchmarks are limited in scale due to the cost of manually annotating long texts. Inspired by work on computational novel analysis, we release the Too Long, Didn’t Model (TLDM) benchmark, which tests a model’s ability to reliably report plot summary, storyworld configuration, and elapsed narrative time. We find that none of seven tested frontier LLMs retain stable understanding beyond 64k tokens. Our results suggest language model developers must look beyond “lost in the middle” benchmarks when evaluating model performance in complex long context scenarios. To aid in further development, we release the TLDM benchmark together with reference code and data.
Anthology ID:
2026.latechclfl-1.28
Volume:
Proceedings of the 10th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Diego Alves, Yuri Bizzoni, Stefania Degaetano-Ortlieb, Anna Kazantseva, Janis Pagel, Stan Szpakowicz
Venues:
LaTeCH-CLfL | WS
SIG:
SIGHUM
Publisher:
Association for Computational Linguistics
Pages:
295–304
URL:
https://aclanthology.org/2026.latechclfl-1.28/
Cite (ACL):
Sil Hamilton, Rebecca Hicke, Mia Ferrante, Matthew Wilkens, and David Mimno. 2026. Too Long, Didn’t Model: Decomposing LLM Long Context Understanding With Novels. In Proceedings of the 10th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature 2026, pages 295–304, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Too Long, Didn’t Model: Decomposing LLM Long Context Understanding With Novels (Hamilton et al., LaTeCH-CLfL 2026)
PDF:
https://aclanthology.org/2026.latechclfl-1.28.pdf
Supplementary material:
2026.latechclfl-1.28.SupplementaryMaterial.zip