Where do LLMs Encode the Knowledge to Assess the Ambiguity?

Hancheol Park, Geonmin Kim


Abstract
Recently, large language models (LLMs) have shown remarkable performance across a wide range of natural language processing tasks, thanks to the vast amount of knowledge they encode. Nevertheless, they often generate unreliable responses. A common example is returning a single, biased answer to an ambiguous question that admits multiple correct answers. To address this issue, in this study, we discuss methods for detecting such ambiguous samples. More specifically, we propose a classifier that takes a representation from an intermediate layer of the LLM as input. This design is motivated by observations from previous research that, in intermediate layers, the representations of ambiguous samples lie closer in the embedding space to those of samples with the relevant labels, whereas this is not necessarily true in higher layers. Our experimental results demonstrate that representations from intermediate layers detect ambiguous input prompts more effectively than those from the final layer. Furthermore, because most datasets lack labels regarding the ambiguity of samples, we propose a method for training such classifiers without ambiguity labels and evaluate its effectiveness.
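The probing idea described above can be illustrated with a small, self-contained sketch. This is not the paper's code: the "hidden states" below are synthetic Gaussian vectors standing in for per-layer LLM representations (in practice one would extract them, e.g., with `model(..., output_hidden_states=True)` in Hugging Face Transformers), the layer names and separation values are invented for illustration, and the probe is a simple nearest-centroid classifier rather than whatever classifier the paper trains. The point it demonstrates is only the abstract's observation: if ambiguous and unambiguous samples are better separated at an intermediate layer than at the final layer, a probe fit on that intermediate layer scores higher.

```python
# Toy sketch (synthetic data, hypothetical layer names): probe per-layer
# "hidden states" with a nearest-centroid classifier and compare layers.
import math
import random

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def probe_accuracy(samples):
    """samples: list of (vector, label). Fit class centroids on the data
    and report nearest-centroid accuracy on the same data."""
    by_label = {}
    for vec, lab in samples:
        by_label.setdefault(lab, []).append(vec)
    cents = {lab: centroid(vs) for lab, vs in by_label.items()}
    correct = sum(
        1
        for vec, lab in samples
        if min(cents, key=lambda l: dist(vec, cents[l])) == lab
    )
    return correct / len(samples)

random.seed(0)

def make_layer(separation, dim=8, n_per_class=50):
    # Label 1 = "ambiguous" prompts, drawn near +separation per dimension;
    # label 0 = "unambiguous", drawn near 0. Larger separation = more
    # linearly separable representations at that (pretend) layer.
    data = []
    for lab in (0, 1):
        for _ in range(n_per_class):
            data.append(
                ([random.gauss(lab * separation, 1.0) for _ in range(dim)], lab)
            )
    return data

# Pretend the intermediate layer separates the two classes much more
# strongly than the final layer, as the paper's observations suggest.
layers = {"intermediate": make_layer(2.0), "final": make_layer(0.3)}
accs = {name: probe_accuracy(data) for name, data in layers.items()}
print(accs)  # the intermediate-layer probe should score noticeably higher
```

The same comparison, run over every layer of a real model's `hidden_states` tuple, is one simple way to locate which layer best encodes the knowledge to assess ambiguity.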
Anthology ID: 2025.coling-industry.38
Volume: Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert, Kareem Darwish, Apoorv Agarwal
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 445–452
URL: https://aclanthology.org/2025.coling-industry.38/
Cite (ACL): Hancheol Park and Geonmin Kim. 2025. Where do LLMs Encode the Knowledge to Assess the Ambiguity?. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 445–452, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Where do LLMs Encode the Knowledge to Assess the Ambiguity? (Park & Kim, COLING 2025)
PDF: https://aclanthology.org/2025.coling-industry.38.pdf