Detecting LLM-Assisted Cheating on Open-Ended Writing Tasks on Language Proficiency Tests
Chenhao Niu | Kevin P. Yancey | Ruidong Liu | Mirza Basim Baig | André Kenji Horie | James Sharpnack
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
The high capability of recent Large Language Models (LLMs) has raised concerns about their possible misuse as cheating assistants in open-ended writing tasks in assessments. Although various detection methods have been proposed, most of them have not been evaluated on or optimized for real-world samples from LLM-assisted cheating, where the generated text is often copy-typed imperfectly by the test-taker. In this paper, we present a framework for training LLM-generated text detectors that can effectively detect LLM-generated samples even after they have been copy-typed. We enhance the existing transformer-based classifier training process with contrastive learning on constructed pairwise data and self-training on unlabeled data, and we evaluate the improvements on a real-world dataset from the Duolingo English Test (DET), a high-stakes online English proficiency test. Our experiments demonstrate that the improved model outperforms the original transformer-based classifier and other baselines.
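To make the two training enhancements named in the abstract concrete, the sketch below shows one plausible form they could take on top of a standard transformer classifier: a margin-based contrastive objective over constructed pairs (an LLM-generated text and its copy-typed variant, against human-written text) and a confidence-thresholded self-training objective on unlabeled data. This is a minimal illustration, not the authors' released code; the encoder choice, pooling, and the `margin` and `threshold` values are assumptions.

```python
# Minimal sketch of the two abstract-level ideas: contrastive learning on
# constructed pairwise data and self-training on unlabeled data.
# Assumptions: a HuggingFace encoder, mean pooling, cosine-space contrastive
# loss with a margin, and pseudo-labeling above a confidence threshold.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")
clf_head = torch.nn.Linear(encoder.config.hidden_size, 1)

def embed(texts):
    """Mean-pooled encoder embeddings for a list of strings."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)

def contrastive_loss(llm_texts, copy_typed_texts, human_texts, margin=0.5):
    # Pull each LLM-generated text toward its copy-typed variant and push it
    # away from human-written text; margin and pair construction are assumed.
    z_llm = embed(llm_texts)
    z_ct = embed(copy_typed_texts)
    z_hum = embed(human_texts)
    pos = 1 - F.cosine_similarity(z_llm, z_ct)
    neg = F.relu(F.cosine_similarity(z_llm, z_hum) - margin)
    return (pos + neg).mean()

def self_training_loss(unlabeled_texts, threshold=0.9):
    # Pseudo-label unlabeled responses with the current classifier and train
    # only on high-confidence predictions (a common self-training recipe).
    logits = clf_head(embed(unlabeled_texts)).squeeze(-1)
    probs = torch.sigmoid(logits.detach())
    confident = (probs > threshold) | (probs < 1 - threshold)
    if not confident.any():
        return logits.new_zeros(())
    pseudo = (probs[confident] > 0.5).float()
    return F.binary_cross_entropy_with_logits(logits[confident], pseudo)
```

In training, these auxiliary losses would typically be added to the standard supervised classification loss with tunable weights; the paper itself should be consulted for the exact pair-construction and scheduling details.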