LORE: Continual Logit Rewriting Fosters Faithful Generation

Charles Yu, Qingyun Wang, Yuting Hu, Jinjun Xiong, Heng Ji


Abstract
As autonomous agents and assistants, large language models (LLMs) often struggle with “hallucinations.” Fundamentally, the problem is one of prioritization and balance: the LLM must understand or infer when it needs to be creative and weigh that against its need to be accurate. Most efforts focus on either updating intrinsic knowledge via targeted post-training or adding external knowledge sources that the LLM can reference neurosymbolically (e.g., via retrieval-augmented generation). However, these approaches all ultimately rely on the LLM’s implicit reasoning ability during generation, leaving room for hallucinations even with high-quality training examples and references. Using aspect-oriented summarization as a case study, we propose **LOgit REwriting** (**LORE**), a new controlled generation paradigm that can simultaneously be faithful to external knowledge and to the LLM’s intentions. LORE adds a rewriting module at left-to-right inference time that continually reflects on the newest prediction and searches for a replacement that is more faithful to the source document; it then merges the logits of the replacement with those of the original prediction to generate the next token. We create a new long-context aspect-oriented summarization dataset, **SLPAspect**, and find that LORE generates 5.8% better summaries than the same LLM without LORE rewriting. All code and data from this paper will be available on GitHub after the anonymity period.
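To make the logit-merging step concrete, here is a minimal sketch of what combining two next-token distributions could look like. This is an illustration only, not the paper's implementation: the function `merge_logits` and the interpolation weight `alpha` are hypothetical names, and the abstract does not specify how the replacement's logits are combined with the original's.

```python
import torch
import torch.nn.functional as F

def merge_logits(original_logits: torch.Tensor,
                 replacement_logits: torch.Tensor,
                 alpha: float = 0.5) -> torch.Tensor:
    """Blend the original next-token logits with those of a candidate
    replacement judged more faithful to the source document.
    `alpha` (hypothetical) controls how strongly the replacement
    overrides the original prediction."""
    return (1.0 - alpha) * original_logits + alpha * replacement_logits

# Toy example over a 5-token vocabulary: the original model favors
# token 2, while the rewriting module's replacement favors token 4.
original = torch.tensor([0.1, 0.2, 3.0, 0.3, 0.5])
replacement = torch.tensor([0.1, 0.2, 0.3, 0.4, 3.0])

merged = merge_logits(original, replacement, alpha=0.7)
next_token = int(torch.argmax(F.softmax(merged, dim=-1)))
print(next_token)  # 4 at this alpha: the more faithful candidate wins
```

With a simple linear interpolation like this, `alpha` would trade off faithfulness against the model's original intent; the actual merging rule in LORE may differ.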
Anthology ID:
2025.findings-emnlp.1163
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
21314–21328
URL:
https://aclanthology.org/2025.findings-emnlp.1163/
Cite (ACL):
Charles Yu, Qingyun Wang, Yuting Hu, Jinjun Xiong, and Heng Ji. 2025. LORE: Continual Logit Rewriting Fosters Faithful Generation. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 21314–21328, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
LORE: Continual Logit Rewriting Fosters Faithful Generation (Yu et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1163.pdf
Checklist:
2025.findings-emnlp.1163.checklist.pdf