Hangyeol Yoo
Also published as: HanGyeol Yoo
2026
TReX: Tokenizer Regression for Optimal Data Mixture
Inho Won | Hangyeol Yoo | Minkyung Cho | Jungyeul Park | Hoyun Song | KyungTae Lim
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Building effective tokenizers for multilingual Large Language Models (LLMs) requires careful control over language-specific data mixtures. While a tokenizer’s compression performance critically affects the efficiency of LLM training and inference, existing approaches rely on heuristics or costly large-scale searches to determine optimal language ratios. We introduce Tokenizer Regression for Optimal Data MiXture (TReX), a regression-based framework that efficiently predicts the optimal data mixture for tokenizer training. TReX trains small-scale proxy tokenizers on random mixtures, gathers their compression statistics, and learns to predict compression performance from data mixtures. This learned model enables scalable mixture search before large-scale tokenizer training, mitigating the accuracy-cost trade-off in multilingual tokenizer design. Tokenizers trained with TReX’s predicted mixtures outperform mixtures based on LLaMA3 and uniform distributions by up to 12% in both in- and out-of-distribution compression efficiency, demonstrating strong scalability, robustness, and practical effectiveness.
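The mixture-search loop described in the abstract can be illustrated with a minimal sketch: sample random mixtures, score small proxy tokenizers, fit a regressor from mixture to compression, and then search candidate mixtures with the cheap learned model. The regressor choice, the synthetic compression measure, and all hyperparameters below are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of a TReX-style mixture search (hypothetical stand-ins throughout).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n_langs = 5          # languages in the mixture
n_proxy_runs = 64    # small-scale proxy tokenizer trainings

def measure_compression(mixture):
    """Stand-in for training a small proxy tokenizer on `mixture` and measuring
    its compression (e.g., bytes per token) on held-out text. In practice this
    would call a real tokenizer trainer such as SentencePiece."""
    ideal = np.array([0.35, 0.25, 0.15, 0.15, 0.10])  # synthetic ground truth
    return 4.0 - np.sum((mixture - ideal) ** 2) + rng.normal(0, 0.01)

# 1) Sample random mixtures (points on the simplex) and score proxy tokenizers.
mixtures = rng.dirichlet(np.ones(n_langs), size=n_proxy_runs)
scores = np.array([measure_compression(m) for m in mixtures])

# 2) Learn to predict compression from the mixture.
reg = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3))
reg.fit(mixtures, scores)

# 3) Search many candidate mixtures with the cheap learned model instead of
#    training a full-scale tokenizer for each one.
candidates = rng.dirichlet(np.ones(n_langs), size=100_000)
best = candidates[np.argmax(reg.predict(candidates))]
print("predicted-best mixture:", np.round(best, 3))
```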
ELO: Efficient Layer-Specific Optimization for Continual Pretraining of Multilingual LLMs
Hangyeol Yoo | ChangSu Choi | Minjun Kim | Seohyun Song | SeungWoo Song | Inho Won | Jongyoul Park | Cheoneum Park | KyungTae Lim
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 5: Industry Track)
We propose an efficient layer-specific optimization (ELO) method designed to enhance continual pretraining (CP) for specific languages in multilingual large language models (MLLMs). This approach addresses the common challenges of high computational cost and degradation of source language performance associated with traditional CP. The ELO method consists of two main stages: (1) ELO Pretraining, where a small subset of specific layers, identified in our experiments as the critically important first and last layers, is detached from the original MLLM and trained on the target language. This significantly reduces not only the number of trainable parameters but also the total number of parameters computed during the forward pass, minimizing GPU memory consumption and accelerating the training process. (2) Layer Alignment, where the newly trained layers are reintegrated into the original model, followed by a brief full fine-tuning step on a small dataset to align the parameters. Experimental results demonstrate that the ELO method achieves a training speedup of up to 6.46 times compared to existing methods, while improving target language performance by up to 6.2% on qualitative benchmarks and effectively preserving source language (English) capabilities.
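A minimal sketch of the two-stage idea, assuming a toy model in place of a real MLLM: stage 1 pulls the first and last transformer blocks (together with the embedding and output head, an assumption made here) into a small detached sub-model, so the remaining layers are skipped entirely during the forward pass; stage 2 would reinsert the trained blocks and briefly fine-tune the full model for alignment.

```python
# Minimal sketch of ELO-style layer detachment (toy model and hyperparameters).
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy stand-in for a multilingual LLM: embedding + N transformer blocks + head."""
    def __init__(self, vocab=1000, d_model=64, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        ])
        self.head = nn.Linear(d_model, vocab)

    def forward(self, x):
        h = self.embed(x)
        for blk in self.blocks:
            h = blk(h)
        return self.head(h)

model = TinyLM()

# Stage 1 (ELO Pretraining): detach the first and last blocks (plus embedding
# and head, assumed here) into a small sub-model; the middle layers are never
# computed, cutting both trainable and forward-pass parameters.
detached = nn.Sequential(model.embed, model.blocks[0], model.blocks[-1], model.head)
opt = torch.optim.AdamW(detached.parameters(), lr=1e-4)
print(f"stage-1 params: {sum(p.numel() for p in detached.parameters()):,} "
      f"of {sum(p.numel() for p in model.parameters()):,}")

x = torch.randint(0, 1000, (2, 16))   # toy target-language batch
y = torch.randint(0, 1000, (2, 16))
loss = nn.functional.cross_entropy(detached(x).transpose(1, 2), y)
loss.backward()                        # only the detached layers receive gradients
opt.step()

# Stage 2 (Layer Alignment): the trained blocks are reintegrated into `model`
# (they share parameters here), followed by a short full fine-tuning pass on a
# small dataset to realign all parameters.
```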
2025
Unified Automated Essay Scoring and Grammatical Error Correction
SeungWoo Song | Junghun Yuk | ChangSu Choi | HanGyeol Yoo | HyeonSeok Lim | KyungTae Lim | Jungyeul Park
Findings of the Association for Computational Linguistics: NAACL 2025
This study explores the integration of automated writing evaluation (AWE) and grammatical error correction (GEC) through multitask learning, demonstrating how combining these distinct tasks can enhance performance in both areas. By leveraging a shared learning framework, we show that models trained jointly on AWE and GEC outperform those trained on each task individually. To support this effort, we introduce a dataset specifically designed for multitask learning using AWE and GEC. Our experiments reveal significant synergies between tasks, leading to improvements in both writing assessment accuracy and error correction precision. This research represents a novel approach for optimizing language learning tools by unifying writing evaluation and correction tasks, offering insights into the potential of multitask learning in educational applications.
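A minimal sketch of the shared-encoder multitask setup, with a toy architecture and an unweighted sum of the two losses; the paper's actual model, dataset, and loss weighting are not specified here.

```python
# Minimal sketch of joint AWE + GEC multitask training (toy architecture).
import torch
import torch.nn as nn

class SharedEncoderMultitask(nn.Module):
    def __init__(self, vocab=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.score_head = nn.Linear(d_model, 1)        # AWE: predict an essay score
        self.correct_head = nn.Linear(d_model, vocab)  # GEC: predict corrected tokens

    def forward(self, x):
        h = self.encoder(self.embed(x))
        return self.score_head(h.mean(dim=1)).squeeze(-1), self.correct_head(h)

model = SharedEncoderMultitask()
x = torch.randint(0, 1000, (4, 32))         # learner essays (token ids)
score_gold = torch.rand(4)                  # gold essay scores
gec_gold = torch.randint(0, 1000, (4, 32))  # gold corrected tokens

score_pred, gec_logits = model(x)
loss = nn.functional.mse_loss(score_pred, score_gold) + \
       nn.functional.cross_entropy(gec_logits.transpose(1, 2), gec_gold)
loss.backward()   # both heads update the shared encoder, the source of the synergy
```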
2024
X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment
DongJae Shin | HyeonSeok Lim | Inho Won | ChangSu Choi | Minjun Kim | SeungWoo Song | HanGyeol Yoo | SangMin Kim | KyungTae Lim
Findings of the Association for Computational Linguistics: NAACL 2024
The impressive development of large language models (LLMs) is expanding into the realm of large multimodal models (LMMs), which incorporate multiple types of data beyond text. However, the nature of multimodal models leads to significant expenses in the creation of training data. Furthermore, constructing multilingual data for LMMs presents its own set of challenges due to language diversity and complexity. Therefore, in this study, we propose two cost-effective methods to solve this problem: (1) vocabulary expansion and pretraining of multilingual LLM for specific languages, and (2) automatic and elaborate construction of multimodal datasets using GPT4-V. Based on these methods, we constructed a 91K English-Korean-Chinese multilingual, multimodal training dataset. Additionally, we developed a bilingual multimodal model that exhibits excellent performance in both Korean and English, surpassing existing approaches.
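The vocabulary-expansion step in (1) can be sketched with the Hugging Face transformers API; the base model ("gpt2") and the added Korean tokens below are illustrative stand-ins, not the model or vocabulary used in the paper.

```python
# Minimal sketch of vocabulary expansion for a target language (illustrative names).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Add language-specific subwords so target-language text no longer fragments
# into many byte-level pieces, then resize the embedding matrix to match.
new_tokens = ["안녕하세요", "감사합니다", "이미지"]   # toy Korean examples
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; new vocab size: {len(tokenizer)}")

# The expanded model would then be continually pretrained on target-language
# text before the vision-language alignment stage.
```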