2025
Can LLMs Truly Plan? A Comprehensive Evaluation of Planning Capabilities
Gayeon Jung | HyeonSeok Lim | Minjun Kim | Joon-ho Lim | KyungTae Lim | Hansaem Kim
Findings of the Association for Computational Linguistics: EMNLP 2025
Existing assessments of the planning capabilities of large language models (LLMs) remain largely limited to a single language or specific representation formats. To address this gap, we introduce Multi-Plan, a benchmark comprising 204 multilingual and multi-format travel planning scenarios. Experimental results with state-of-the-art LLMs show that Multi-Plan effectively highlights performance disparities among models, with reasoning-specialized models performing notably better. Interestingly, language differences had minimal impact, whereas mathematically structured representations significantly improved planning accuracy for most models, underscoring the crucial role of input format. These findings deepen our understanding of the planning abilities of LLMs, offer valuable insights for future research, and highlight the need for more sophisticated AI evaluation methods. The dataset is publicly available at http://huggingface.co/datasets/Bllossom/Multi-Plan.
2024
KULTURE Bench: A Benchmark for Assessing Language Model in Korean Cultural Context
Xiaonan Wang | Jinyoung Yeo | Joon-Ho Lim | Hansaem Kim
Proceedings of the 38th Pacific Asia Conference on Language, Information and Computation
2019
QE BERT: Bilingual BERT Using Multi-task Learning for Neural Quality Estimation
Hyun Kim | Joon-Ho Lim | Hyun-Ki Kim | Seung-Hoon Na
Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
For translation quality estimation at the word and sentence levels, this paper presents a novel approach based on BERT, which has recently achieved impressive results on various natural language processing tasks. Our proposed model re-purposes BERT for translation quality estimation and uses multi-task learning for the sentence-level task and the word-level subtasks (i.e., source word, target word, and target gap). Experimental results on the WMT19 Quality Estimation shared task show that our systems achieve competitive results and provide significant improvements over the baseline.
2004
Semantic Role Labeling using Maximum Entropy Model
Joon-Ho Lim | Young-Sook Hwang | So-Young Park | Hae-Chang Rim
Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004