ULTRABENCH: Benchmarking LLMs under Extreme Fine-grained Text Generation

Longfei Yun, Letian Peng, Jingbo Shang


Abstract
Fine-grained control is essential for precise and customizable text generation, yet existing benchmarks evaluate models on only a few attributes, typically fewer than five. We introduce UltraBench, a new benchmark for extremely fine-grained controllable generation (EFCG), which evaluates large language models (LLMs) under dense, multi-attribute constraints. Each sample includes approximately 70 attributes, combining LLM-extracted soft constraints (e.g., style and tone) with programmatically enforced hard constraints (e.g., word count). Using UltraBench, we conduct a comprehensive evaluation of state-of-the-art LLMs and prompting strategies. Models such as GPT-4.1 and Qwen3-8B perform well on individual constraints, achieving instruction-level accuracy above 70%, but consistently fail to satisfy all constraints simultaneously. To understand this limitation, we analyze model behavior across different dimensions. First, we observe a clear position bias: models tend to prioritize constraints presented later in the prompt while neglecting those that appear earlier. Second, we find that structural and formatting-related constraints are significantly more difficult to satisfy than content- or style-based ones, suggesting that current models struggle to coordinate global structure with token-level control. Finally, our error analysis reveals distinct failure modes: GPT-4.1 often attempts to follow constraints but falls short in precision, whereas LLaMA frequently omits constraints, particularly in multi-turn settings. These findings highlight fundamental limitations in EFCG and underscore the need for new methods that support dense, instruction-sensitive generation.
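The abstract distinguishes programmatically enforced hard constraints (e.g., word count) from LLM-judged soft ones, and contrasts per-constraint (instruction-level) accuracy with satisfying all constraints simultaneously. A minimal sketch of how such hard-constraint checking and the two scoring notions could work is below; the specific constraint functions and thresholds are hypothetical illustrations, not UltraBench's actual specification.

```python
# Sketch of programmatic hard-constraint verification in the spirit of
# UltraBench. Constraint names and rules are hypothetical, not the
# benchmark's released checkers.

def check_word_count(text, min_words, max_words):
    """Hard constraint: total word count falls within [min_words, max_words]."""
    n = len(text.split())
    return min_words <= n <= max_words

def check_keyword_present(text, keyword):
    """Hard constraint: a required keyword appears in the output."""
    return keyword.lower() in text.lower()

def check_paragraph_count(text, expected):
    """Hard constraint: exact number of non-empty paragraphs."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return len(paragraphs) == expected

def instruction_level_accuracy(text, checks):
    """Fraction of individual constraints satisfied (instruction-level)."""
    results = [check(text) for check in checks]
    return sum(results) / len(results)

def sample_level_pass(text, checks):
    """True only if every constraint is satisfied simultaneously."""
    return all(check(text) for check in checks)

# Example: a model output scored against three hard constraints.
output = "Deep learning models are powerful.\n\nThey are widely used."
checks = [
    lambda t: check_word_count(t, 5, 50),
    lambda t: check_keyword_present(t, "deep learning"),
    lambda t: check_paragraph_count(t, 2),
]
```

This toy setup makes the paper's headline gap concrete: a model can score high on `instruction_level_accuracy` (most individual checks pass) while `sample_level_pass` still fails whenever even one of the roughly 70 constraints is violated.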
Anthology ID:
2025.findings-emnlp.835
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15438–15453
URL:
https://aclanthology.org/2025.findings-emnlp.835/
Cite (ACL):
Longfei Yun, Letian Peng, and Jingbo Shang. 2025. ULTRABENCH: Benchmarking LLMs under Extreme Fine-grained Text Generation. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 15438–15453, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
ULTRABENCH: Benchmarking LLMs under Extreme Fine-grained Text Generation (Yun et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.835.pdf
Checklist:
 2025.findings-emnlp.835.checklist.pdf