Unsupervised Detection of LLM-Generated Text in Korean Using Syntactic and Semantic Cues
Heejeong Jeon | MinSu Park | YunSeok Choi | Eunil Park
Findings of the Association for Computational Linguistics: EACL 2026
As Large Language Models (LLMs) are increasingly used for content creation, detecting AI-generated text has become a critical challenge. Prior work has largely focused on English, leaving low-resource languages such as Korean underexplored. We propose an unsupervised detection framework that integrates two complementary signals: syntactic token cohesiveness (TOCSIN) and semantic regeneration similarity (SimLLM). To support evaluation, we construct a Korean pairwise dataset of 1,000 anchors with continuation- and regeneration-style generations and further assess performance across domains (news, research paper abstracts, essays) and model families (GPT-3.5 Turbo, GPT-4o, HyperCLOVA X, LLaMA-3-8B). Without any training, our ensemble achieves up to 0.963 F1 and 0.985 ROC-AUC, outperforming baselines. These results demonstrate that the combination of syntactic and semantic cues enables robust unsupervised detection in low-resource settings. Code available at https://github.com/dxlabskku/llm-detection-main.
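The abstract describes fusing a syntactic signal (TOCSIN-style token cohesiveness) with a semantic signal (SimLLM-style regeneration similarity) into an unsupervised ensemble. A minimal sketch of such score fusion is shown below; all function names, the min-max normalization, and the simple averaging are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of unsupervised score fusion: normalize each detector's
# per-document scores to [0, 1] and average them. The normalization and
# averaging choices are assumptions for illustration only.

def minmax(scores):
    """Min-max normalize a list of scores to the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def ensemble_scores(syntactic, semantic):
    """Average normalized syntactic and semantic detector scores."""
    syn = minmax(syntactic)
    sem = minmax(semantic)
    return [(a + b) / 2 for a, b in zip(syn, sem)]

def classify(fused, threshold=0.5):
    """Label a document as LLM-generated (1) when its fused score exceeds threshold."""
    return [int(s > threshold) for s in fused]
```

With per-document scores from both detectors, `classify(ensemble_scores(syn, sem))` yields binary predictions without any training data, matching the unsupervised setting the abstract describes.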