Chain-of-Specificity: Enhancing Task-Specific Constraint Adherence in Large Language Models

Kaiwen Wei, Jiang Zhong, Hongzhi Zhang, Fuzheng Zhang, Di Zhang, Li Jin, Yue Yu, Jingyuan Zhang


Abstract
Large Language Models (LLMs) exhibit remarkable generative capabilities, producing valuable information across a wide range of tasks. Despite these advancements, prior research has found that LLMs sometimes struggle to adhere to specific constraints, such as being in a specific place or at a specific time, and at times overlook them entirely, leading to responses that are either too generic or not fully satisfactory. Existing approaches attempt to address this issue by decomposing or rewriting input instructions, or by reflecting on prior failures, yet they fall short of adequately emphasizing specific constraints and unlocking the underlying knowledge, such as programming within the context of software development. In response, this paper proposes a simple yet effective method called Chain-of-Specificity (CoS). Specifically, CoS emphasizes the specific constraints in the input instruction, unlocks relevant knowledge within LLMs, and refines the response. Experiments on publicly available and self-built complex datasets demonstrate that CoS outperforms existing methods in enhancing generated content, especially with respect to specificity. Moreover, as the number of specific constraints increases, other baselines falter while CoS continues to perform well. Finally, we show that distilling responses generated by CoS effectively improves the ability of smaller models to follow constrained instructions.
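The abstract describes CoS only at a high level (emphasize constraints, unlock knowledge, refine). The sketch below is a hypothetical illustration of what such a prompting chain might look like; the three-stage structure, the prompt wording, and the `call_llm` helper are assumptions for illustration, not the authors' exact implementation.

```python
# Hypothetical sketch of a Chain-of-Specificity style prompting pipeline.
# `call_llm` is a placeholder for any chat-completion backend; the prompts
# and staging here are illustrative assumptions, not the paper's prompts.

def call_llm(prompt: str) -> str:
    """Stub: replace with a call to an LLM of your choice."""
    raise NotImplementedError

def chain_of_specificity(instruction: str) -> str:
    # Stage 1: surface the specific constraints in the instruction.
    constraints = [
        c.strip() for c in call_llm(
            "List the specific constraints (e.g., place, time, format) that "
            f"the following instruction must satisfy, one per line:\n{instruction}"
        ).splitlines() if c.strip()
    ]

    # Stage 2: answer while explicitly emphasizing each constraint,
    # so the model's relevant knowledge is brought to bear on it.
    draft = call_llm(
        f"Instruction: {instruction}\n"
        "Pay particular attention to these constraints:\n"
        + "\n".join(f"- {c}" for c in constraints)
        + "\nNow respond to the instruction."
    )

    # Stage 3: refine the draft, checking it against every constraint.
    return call_llm(
        f"Instruction: {instruction}\n"
        f"Draft answer: {draft}\n"
        "Revise the draft so that every constraint listed above is "
        "explicitly satisfied; keep the content specific, not generic."
    )
```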
Anthology ID:
2025.coling-main.164
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
2401–2416
URL:
https://aclanthology.org/2025.coling-main.164/
Cite (ACL):
Kaiwen Wei, Jiang Zhong, Hongzhi Zhang, Fuzheng Zhang, Di Zhang, Li Jin, Yue Yu, and Jingyuan Zhang. 2025. Chain-of-Specificity: Enhancing Task-Specific Constraint Adherence in Large Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 2401–2416, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Chain-of-Specificity: Enhancing Task-Specific Constraint Adherence in Large Language Models (Wei et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.164.pdf