Impeding LLM-assisted Cheating in Introductory Programming Assignments via Adversarial Perturbation

Saiful Salim, Rubin Yang, Alexander Cooper, Suryashree Ray, Saumya Debray, Sazzadur Rahaman


Abstract
While large language model (LLM)-based programming assistants such as GitHub Copilot and ChatGPT can improve the productivity of professional software developers, they can also facilitate cheating in introductory computer programming courses. Assuming that instructors have limited control over such industrial-strength models, this paper investigates the baseline performance of five widely used LLMs on a collection of introductory programming problems, examines adversarial perturbations designed to degrade their performance, and describes the results of a user study measuring how effective such perturbations are at hindering actual code generation for introductory programming assignments. The user study suggests that (i) the perturbations, in combination, reduced the average correctness score by 77%, and (ii) the drop in correctness caused by these perturbations varied with how detectable they were.
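To make the idea of perturbing an assignment statement concrete, the sketch below shows one simple, hypothetical class of perturbation: inserting zero-width Unicode characters into the problem text. This is an assumed illustration only, not necessarily one of the perturbations studied in the paper; the perturbed statement renders identically for a human reader, but a verbatim copy-paste into an LLM prompt carries a different character sequence.

# Illustrative sketch only: a simple text perturbation an instructor might apply
# to an assignment statement before distributing it. The perturbations evaluated
# in the paper may differ; names and parameters here are assumptions.

ZERO_WIDTH_SPACE = "\u200b"

def perturb_statement(text: str, every_n_chars: int = 4) -> str:
    """Insert zero-width spaces into the problem statement.

    The result looks unchanged when rendered, but the character (and hence
    token) sequence seen by an LLM changes if the text is pasted verbatim.
    """
    out = []
    for i, ch in enumerate(text, start=1):
        out.append(ch)
        if i % every_n_chars == 0:
            out.append(ZERO_WIDTH_SPACE)
    return "".join(out)

if __name__ == "__main__":
    statement = "Write a function that returns the sum of all even numbers in a list."
    perturbed = perturb_statement(statement)
    print(perturbed)                        # appears unchanged on screen
    print(len(statement), len(perturbed))   # but the lengths differ

Whether such a perturbation actually degrades generated code, and how easily students notice and strip it, is exactly what the paper's measurements and user study address.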
Anthology ID: 2024.emnlp-main.27
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 445–463
URL: https://aclanthology.org/2024.emnlp-main.27
Cite (ACL): Saiful Salim, Rubin Yang, Alexander Cooper, Suryashree Ray, Saumya Debray, and Sazzadur Rahaman. 2024. Impeding LLM-assisted Cheating in Introductory Programming Assignments via Adversarial Perturbation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 445–463, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Impeding LLM-assisted Cheating in Introductory Programming Assignments via Adversarial Perturbation (Salim et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.27.pdf