Low-Hallucination and Efficient Coreference Resolution with LLMs

Yujian Gan, Yuan Liang, Jinxia Xie, Yanni Lin, Juntao Yu, Massimo Poesio


Abstract
Large Language Models (LLMs) have shown promising results in coreference resolution, especially after fine-tuning. However, recent generative approaches face a critical issue: hallucinations, where the model generates content not present in the original input. These hallucinations make evaluation difficult and degrade overall performance. To address this issue, we analyze the underlying causes of hallucinations and propose a low-hallucination, efficient solution. Specifically, we introduce Efficient Constrained Decoding for Coreference Resolution, which maintains strong robustness while significantly improving computational efficiency. On the English OntoNotes development set, our approach achieves slightly better performance than previous state-of-the-art methods while requiring substantially fewer parameters.
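The core idea of constrained decoding, as the abstract describes it, is to prevent the model from emitting mentions that do not appear in the input. A minimal illustrative sketch (not the paper's implementation; the scorer and tokenization here are hypothetical stand-ins for LM logits) restricts each generated token to those that extend a span actually occurring in the source document:

```python
# Toy sketch of constrained decoding for span copying (illustrative only).
# At every step the decoder may only emit a token that continues some
# span present in the source document, so it cannot hallucinate mentions.

def allowed_next_tokens(doc_tokens, prefix):
    """Return the set of tokens that can follow `prefix` in the document."""
    n = len(prefix)
    allowed = set()
    for i in range(len(doc_tokens) - n):
        if doc_tokens[i:i + n] == prefix:
            allowed.add(doc_tokens[i + n])
    return allowed

def constrained_decode(doc_tokens, score_fn, max_len=5):
    """Greedy decoding restricted to spans occurring in doc_tokens."""
    out = []
    for _ in range(max_len):
        candidates = allowed_next_tokens(doc_tokens, out)
        if not candidates:
            break  # no document span extends the current prefix
        out.append(max(candidates, key=score_fn))
    return out

doc = "the president said she would visit the capital".split()
# Hypothetical scorer preferring longer words, standing in for LM logits.
span = constrained_decode(doc, score_fn=len, max_len=2)
print(span)  # a two-token span copied verbatim from the document
```

A real system would apply the same masking to the logits of a fine-tuned LLM at each decoding step; the efficiency gains reported in the paper come from how that constraint is computed, which this toy version does not model.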
Anthology ID:
2025.findings-emnlp.934
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17243–17256
URL:
https://aclanthology.org/2025.findings-emnlp.934/
Cite (ACL):
Yujian Gan, Yuan Liang, Jinxia Xie, Yanni Lin, Juntao Yu, and Massimo Poesio. 2025. Low-Hallucination and Efficient Coreference Resolution with LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 17243–17256, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Low-Hallucination and Efficient Coreference Resolution with LLMs (Gan et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.934.pdf
Checklist:
2025.findings-emnlp.934.checklist.pdf