Permitted Knowledge Boundary: Evaluating the Knowledge-Constrained Responsiveness of Large Language Models

Wenrui Bao, Kai Wang, Siqiang Luo, Xiang Li


Abstract
With the advancement of large language models (LLMs), recent research has raised concerns about their controllability. In this paper, we argue for the importance of Knowledge-Constrained Responsiveness (KCR), which ensures that LLMs comply with human-defined constraints. However, KCR is an implicit and unobservable capability of LLMs, functioning as a black box that currently eludes quantitative assessment. To address this issue, we first introduce the definition of a “permitted boundary” and define the “boundary bias” to characterize KCR. We propose six metrics to quantify the boundary bias of LLMs and thereby assess their KCR. Furthermore, we establish a benchmark with two new datasets, KCR-SimpleQA and KCR-WebNLG, to evaluate the performance of LLMs. Our extensive experiments show that several tested LLMs still struggle to varying degrees when adhering to constraints, especially without the corresponding knowledge.
Anthology ID:
2025.findings-emnlp.722
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13390–13405
URL:
https://aclanthology.org/2025.findings-emnlp.722/
Cite (ACL):
Wenrui Bao, Kai Wang, Siqiang Luo, and Xiang Li. 2025. Permitted Knowledge Boundary: Evaluating the Knowledge-Constrained Responsiveness of Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 13390–13405, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Permitted Knowledge Boundary: Evaluating the Knowledge-Constrained Responsiveness of Large Language Models (Bao et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.722.pdf
Checklist:
 2025.findings-emnlp.722.checklist.pdf