Recognizing Limits: Investigating Infeasibility in Large Language Models

Wenbo Zhang, Zihang Xu, Hengrui Cai


Abstract
Large language models (LLMs) have shown remarkable performance across a wide range of tasks, but they often fail to handle queries that exceed their knowledge and capabilities, producing incorrect or fabricated responses. This paper addresses the need for LLMs to recognize and refuse infeasible tasks, i.e., requests that surpass their capabilities. We conceptualize four main categories of infeasible tasks for LLMs, covering a broad spectrum of hallucination-related challenges identified in prior literature. We develop and benchmark a new dataset comprising diverse infeasible and feasible tasks to evaluate multiple LLMs' abilities to decline infeasible requests. Furthermore, we explore the potential of improving LLMs' refusal capabilities through fine-tuning. Experiments validate the effectiveness of our trained models, suggesting promising directions for refining the operational boundaries of LLMs in real-world applications.
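To make the evaluation setup concrete, below is a minimal Python sketch of how one might measure a refusal rate on infeasible versus feasible prompts. This is not the paper's actual protocol: `query_model` is a hypothetical stand-in for any LLM call, and the keyword-based `is_refusal` heuristic is an illustrative assumption, not the authors' scoring method.

```python
# Hypothetical sketch of a refusal-rate evaluation (assumptions: query_model
# is a placeholder for a real LLM call; refusal detection is a crude keyword
# heuristic, not the paper's evaluation protocol).

REFUSAL_MARKERS = ("i cannot", "i can't", "unable to", "beyond my capabilities")

def query_model(prompt: str) -> str:
    """Placeholder for an actual LLM API call (hypothetical)."""
    return "I cannot verify real-time information."

def is_refusal(response: str) -> bool:
    """Detect a refusal via simple keyword matching."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(prompts: list[str]) -> float:
    """Fraction of prompts the model declines to answer."""
    refusals = sum(is_refusal(query_model(p)) for p in prompts)
    return refusals / len(prompts)

if __name__ == "__main__":
    # Infeasible: requires real-time data or non-text modalities.
    infeasible = [
        "What is the current stock price of AAPL?",
        "Describe the smell of the flowers in this photo.",
    ]
    # Feasible: answerable from parametric knowledge.
    feasible = ["What is the capital of France?"]
    print(f"Refusal rate (infeasible): {refusal_rate(infeasible):.2f}")
    print(f"Refusal rate (feasible):   {refusal_rate(feasible):.2f}")
```

A well-calibrated model should score high on the infeasible set and near zero on the feasible set; the gap between the two rates separates genuine limit recognition from blanket over-refusal.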
Anthology ID:
2025.findings-emnlp.535
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10092–10112
URL:
https://aclanthology.org/2025.findings-emnlp.535/
Cite (ACL):
Wenbo Zhang, Zihang Xu, and Hengrui Cai. 2025. Recognizing Limits: Investigating Infeasibility in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 10092–10112, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Recognizing Limits: Investigating Infeasibility in Large Language Models (Zhang et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.535.pdf
Checklist:
https://aclanthology.org/2025.findings-emnlp.535.checklist.pdf