Factual Knowledge Assessment of Language Models Using Distractors

Hichem Ammar Khodja, Abderrahmane Ait gueni ssaid, Frederic Bechet, Quentin Brabant, Alexis Nasr, Gwénolé Lecorvé


Abstract
Language models encode extensive factual knowledge within their parameters. The accurate assessment of this knowledge is crucial for understanding and improving these models. In the literature, factual knowledge assessment often relies on cloze sentences, which can lead to erroneous conclusions due to the complexity of natural language (out-of-subject continuations, the existence of multiple correct answers, and the many ways of expressing them). In this paper, we introduce a new interpretable knowledge assessment method that mitigates these issues by leveraging distractors—incorrect but plausible alternatives to the correct answer. We propose several strategies for retrieving distractors and determine the most effective one through experimentation. Our method is evaluated against existing approaches, demonstrating solid alignment with human judgment and stronger robustness to verbalization artifacts. The code and data to reproduce our experiments are available on GitHub.
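The distractor-based protocol described in the abstract can be illustrated with a minimal sketch: a model is credited with knowing a fact only if it scores the correct answer above every distractor. This is an assumption about the general idea, not the paper's exact implementation; the function names (`pick_answer`, `knows_fact`) and the dictionary scorer standing in for a language model's log-probability are illustrative.

```python
def pick_answer(score_fn, prompt, candidates):
    # Score each candidate completion of the prompt; return the highest-scoring one.
    return max(candidates, key=lambda c: score_fn(f"{prompt} {c}"))

def knows_fact(score_fn, prompt, answer, distractors):
    # The model is credited with the fact iff the correct answer
    # outranks all distractors under the scoring function.
    return pick_answer(score_fn, prompt, [answer] + distractors) == answer

# Toy stand-in for an LM log-probability scorer (hypothetical values).
toy_scores = {
    "The capital of France is Paris.": -1.2,
    "The capital of France is Lyon.": -3.5,
    "The capital of France is Marseille.": -4.0,
}
score_fn = toy_scores.get  # dict lookup in place of a real model

print(knows_fact(score_fn, "The capital of France is",
                 "Paris.", ["Lyon.", "Marseille."]))  # True
```

Because every candidate is scored on the same prompt, this comparison sidesteps the cloze-sentence pitfalls the abstract mentions: off-topic continuations and alternative phrasings never enter the candidate set.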
Anthology ID:
2025.coling-main.537
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
8043–8056
URL:
https://aclanthology.org/2025.coling-main.537/
Cite (ACL):
Hichem Ammar Khodja, Abderrahmane Ait gueni ssaid, Frederic Bechet, Quentin Brabant, Alexis Nasr, and Gwénolé Lecorvé. 2025. Factual Knowledge Assessment of Language Models Using Distractors. In Proceedings of the 31st International Conference on Computational Linguistics, pages 8043–8056, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Factual Knowledge Assessment of Language Models Using Distractors (Ammar Khodja et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.537.pdf