Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models

Souvik Das, Lifeng Jin, Linfeng Song, Haitao Mi, Baolin Peng, Dong Yu


Abstract
Large language models (LLMs) exhibit impressive natural language capabilities but suffer from hallucination – generating content not grounded in their training data. Recent work has focused on decoding techniques that improve factuality by leveraging LLMs' hierarchical representation of factual knowledge, manipulating the predicted distributions at inference time. Current state-of-the-art approaches refine decoding by contrasting logits from a lower layer with those of the final layer, exploiting factuality-related information within the model's forward pass. However, such methods often assume the final layer is the most reliable one, and the lower-layer selection process depends on it. In this work, we first propose logit extrapolation of critical token probabilities beyond the last layer for more accurate contrasting. We additionally employ layer-wise entropy-guided lower-layer selection, decoupling the selection process from the final layer. Experiments demonstrate strong performance, surpassing the state of the art on multiple datasets by large margins. Analyses show that different kinds of prompts respond best to different selection strategies.
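The two ideas in the abstract – extrapolating logits beyond the last layer and picking the contrasted lower layer by its entropy rather than by divergence from the final layer – can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the linear extrapolation step, the max-entropy selection rule, and the `alpha`/`beta` parameters are hypothetical simplifications, not the authors' exact formulation.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def extrapolate_logits(penultimate, final, beta=0.5):
    # Hypothetical linear extrapolation "beyond the last layer":
    # continue the penultimate -> final trajectory one fractional step.
    return final + beta * (final - penultimate)

def contrastive_next_token(layer_logits, alpha=1.0, beta=0.5):
    """layer_logits: (num_layers, vocab) next-token logits per layer.

    Selects the lower ("premature") layer by maximum predictive entropy,
    decoupled from the final layer, then contrasts the extrapolated
    ("mature") distribution against the premature one.
    """
    probs = np.array([softmax(l) for l in layer_logits])
    # Entropy-guided selection over all layers except the final one.
    ents = [entropy(p) for p in probs[:-1]]
    premature = int(np.argmax(ents))
    # Extrapolated mature distribution from the last two layers.
    mature = softmax(extrapolate_logits(layer_logits[-2], layer_logits[-1], beta))
    # Contrast: amplify what the mature layers know over the premature layer.
    scores = np.log(mature) - alpha * np.log(probs[premature])
    return int(np.argmax(scores))
```

With a uniform (max-entropy) lowest layer and a final layer favoring one token, the contrast simply amplifies the final layer's preference; the interesting cases arise when a confident lower layer suppresses tokens the upper layers disagree on.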
Anthology ID:
2025.coling-main.439
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
6589–6600
URL:
https://aclanthology.org/2025.coling-main.439/
Cite (ACL):
Souvik Das, Lifeng Jin, Linfeng Song, Haitao Mi, Baolin Peng, and Dong Yu. 2025. Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6589–6600, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models (Das et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.439.pdf