SUMIE: A Synthetic Benchmark for Incremental Entity Summarization

Eunjeong Hwang, Yichao Zhou, Beliz Gunel, James Bradley Wendt, Sandeep Tata


Abstract
No existing dataset adequately tests how well language models can incrementally update entity summaries, a capability that grows more important as these models rapidly advance. The Incremental Entity Summarization (IES) task is vital for maintaining accurate, up-to-date knowledge. To address this gap, we introduce SUMIE, a fully synthetic dataset designed to expose real-world IES challenges. The dataset captures issues such as incorrect entity association and incomplete information, reflecting real-world complexity through diverse generated attributes, summaries, and unstructured paragraphs, with 99% alignment accuracy between the generated summaries and paragraphs. Extensive experiments demonstrate the dataset's difficulty: state-of-the-art LLMs struggle to update summaries with an F1 higher than 80.4%. We will open-source the benchmark and the evaluation metrics to help the community make progress on IES tasks.
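As a rough illustration of the kind of attribute-level F1 the abstract refers to, the sketch below scores a model-updated entity summary against a gold summary, with both represented as sets of (attribute, value) pairs. The pair representation, the exact-match criterion, and the function name are assumptions made for illustration; this is not SUMIE's official evaluation code.

```python
# Minimal sketch of an attribute-level F1 metric for entity summaries.
# Assumption (not SUMIE's official code): a summary is a set of
# (attribute, value) string pairs, and matching is exact.

def attribute_f1(predicted, gold):
    """Return (precision, recall, F1) over (attribute, value) pairs."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0, 1.0, 1.0
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # Toy example: a model-updated summary vs. the gold summary for one entity.
    predicted = {("location", "Berlin"), ("employees", "1,200"), ("founded", "1999")}
    gold = {("location", "Berlin"), ("employees", "1,200"), ("ceo", "A. Doe")}
    print(attribute_f1(predicted, gold))  # (0.666..., 0.666..., 0.666...)
```

In practice, the predicted summary would be produced by an LLM that updates the entity's summary one paragraph at a time; only the scoring step is shown here.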
Anthology ID:
2025.coling-main.721
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
10839–10864
URL:
https://aclanthology.org/2025.coling-main.721/
Cite (ACL):
Eunjeong Hwang, Yichao Zhou, Beliz Gunel, James Bradley Wendt, and Sandeep Tata. 2025. SUMIE: A Synthetic Benchmark for Incremental Entity Summarization. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10839–10864, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
SUMIE: A Synthetic Benchmark for Incremental Entity Summarization (Hwang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.721.pdf