Can LLMs Truly Plan? A Comprehensive Evaluation of Planning Capabilities

Gayeon Jung, HyeonSeok Lim, Minjun Kim, Joon-ho Lim, KyungTae Lim, Hansaem Kim


Abstract
Existing assessments of the planning capabilities of large language models (LLMs) remain largely limited to a single language or to specific representation formats. To address this gap, we introduce the Multi-Plan benchmark, comprising 204 multilingual, multi-format travel planning scenarios. In experiments with state-of-the-art LLMs, Multi-Plan effectively highlights performance disparities among models, with reasoning-specialized models achieving notably superior results. Interestingly, language differences had minimal impact, whereas mathematically structured representations significantly improved planning accuracy for most models, underscoring the crucial role of input format. These findings deepen our understanding of the planning abilities of LLMs, offer valuable insights for future research, and emphasize the need for more sophisticated AI evaluation methods. The dataset is publicly available at http://huggingface.co/datasets/Bllossom/Multi-Plan.
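As a rough illustration, the released dataset can presumably be loaded with the Hugging Face datasets library. This is a minimal sketch, assuming standard Hub conventions: the repository id comes from the abstract, but the split and field names are not described on this page, so the snippet inspects them rather than hard-coding them.

```python
# Minimal sketch: load the Multi-Plan benchmark from the Hugging Face Hub.
# Assumes the `datasets` library is installed (pip install datasets).
# The repository id "Bllossom/Multi-Plan" is taken from the abstract; the
# exact splits and schema are not specified there, so we print them instead
# of guessing.
from datasets import load_dataset

multi_plan = load_dataset("Bllossom/Multi-Plan")

# Discover the available splits and the feature schema of each one.
print(multi_plan)
for split_name, split in multi_plan.items():
    print(split_name, split.features)
```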
Anthology ID:
2025.findings-emnlp.702
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13069–13084
URL:
https://aclanthology.org/2025.findings-emnlp.702/
Cite (ACL):
Gayeon Jung, HyeonSeok Lim, Minjun Kim, Joon-ho Lim, KyungTae Lim, and Hansaem Kim. 2025. Can LLMs Truly Plan? A Comprehensive Evaluation of Planning Capabilities. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 13069–13084, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Can LLMs Truly Plan? A Comprehensive Evaluation of Planning Capabilities (Jung et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.702.pdf
Checklist:
 2025.findings-emnlp.702.checklist.pdf