MinKyu Kim
2026
TELLME: Test-Enhanced Learning for Language Model Enrichment
Minjun Kim | Inho Won | HyeonSeok Lim | MinKyu Kim | Junghun Yuk | Wooyoung Go | Jongyoul Park | Jungyeul Park | KyungTae Lim
Findings of the Association for Computational Linguistics: EACL 2026
Continual pre-training (CPT) has been widely adopted for domain expansion in large language models, but it faces persistent challenges, including the difficulty of acquiring large-scale domain-specific datasets and high computational costs. In this study, we propose Test-Enhanced Learning for Language Model Enrichment (TELLME), a novel method to alleviate these issues. TELLME applies the Test-Enhanced Learning (TEL) principle, in which quizzes administered during training improve the model’s learning efficiency, and integrates it with CPT to promote efficient domain-specific knowledge acquisition and long-term memory retention. Experimental results demonstrate that TELLME outperforms existing methods by up to 23.6% in the financial domain and achieves a 9.8% improvement in long-term memory retention.
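
To make the TEL idea concrete, the sketch below illustrates one plausible way quizzes could be interleaved into a CPT data stream. This is a hypothetical illustration of retrieval practice during training, not the paper's implementation; the function name, the quiz_ratio parameter, and the toy financial examples are all assumptions introduced here.

import random

def interleave_quizzes(domain_docs, quiz_items, quiz_ratio=0.2, seed=0):
    """Yield a training stream that mixes self-test quizzes into the
    domain corpus, so the model periodically retrieves knowledge it
    has just seen instead of only re-reading it (hypothetical sketch).

    domain_docs : list[str]              raw domain text for CPT
    quiz_items  : list[tuple[str, str]]  (question, answer) pairs
    quiz_ratio  : float                  chance of a quiz after each doc
    """
    rng = random.Random(seed)
    quizzes = [f"Q: {q}\nA: {a}" for q, a in quiz_items]
    for doc in domain_docs:
        yield doc
        # With probability quiz_ratio, insert a retrieval-practice quiz.
        if quizzes and rng.random() < quiz_ratio:
            yield rng.choice(quizzes)

# Toy usage: one domain document and one quiz pair (illustrative only).
docs = ["The Basel III framework raises minimum bank capital requirements."]
quiz = [("What does Basel III regulate?", "Bank capital requirements.")]
for sample in interleave_quizzes(docs, quiz, quiz_ratio=1.0):
    print(sample)

The samples yielded by such a stream would then be tokenized and fed to an ordinary causal-language-modeling objective; the only change relative to plain CPT is the periodic insertion of quiz text.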