RAFT: Realistic Attacks to Fool Text Detectors

James Wang, Ran Li, Junfeng Yang, Chengzhi Mao


Abstract
Large language models (LLMs) have exhibited remarkable fluency across various tasks. However, their unethical applications, such as disseminating disinformation, have become a growing concern. Although recent works have proposed a number of LLM detection methods, their robustness and reliability remain unclear. In this paper, we present RAFT: a grammar error-free black-box attack against existing LLM detectors. In contrast to previous attacks on language models, our method exploits the transferability of LLM embeddings at the word level while preserving the original text quality. We leverage an auxiliary embedding to greedily select candidate words to perturb against the target detector. Experiments reveal that our attack effectively compromises all detectors in the study across various domains by up to 99%, and is transferable across source models. Manual human evaluation studies show our attacks are realistic and indistinguishable from original human-written text. We also show that examples generated by RAFT can be used to train adversarially robust detectors. Our work shows that current LLM detectors are not adversarially robust, underscoring the urgent need for more resilient detection mechanisms.
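The abstract describes a greedy, black-box word-level substitution attack: rank words by their influence on the detector, then swap each selected word for a candidate that lowers the detector's machine-text score. The following is a minimal sketch of that general scheme, not the authors' implementation: the mask-based importance ranking, the `detector_score` function, and the `candidates` generator are all toy stand-ins for the paper's embedding-based components.

```python
# Hedged sketch of a greedy word-level substitution attack in the spirit of
# the abstract's description. All components (importance proxy, detector,
# candidate generator) are illustrative stand-ins, not the RAFT codebase.

def greedy_substitution_attack(words, detector_score, candidates, budget=0.1):
    """Greedily replace up to a `budget` fraction of words, keeping a
    substitution only if it lowers the detector's machine-text score."""
    words = list(words)
    max_edits = max(1, int(budget * len(words)))
    best = detector_score(words)
    edits = 0

    # Rank positions by how much masking each word shifts the score --
    # a crude proxy for the embedding-based candidate selection in the paper.
    importance = []
    for i in range(len(words)):
        masked = words[:i] + ["[MASK]"] + words[i + 1:]
        importance.append((abs(detector_score(masked) - best), i))

    for _, i in sorted(importance, reverse=True):
        if edits >= max_edits:
            break
        original = words[i]
        for cand in candidates(original):
            trial = words[:i] + [cand] + words[i + 1:]
            score = detector_score(trial)
            if score < best:  # keep only perturbations that fool the detector
                best, words = score, trial
        if words[i] != original:
            edits += 1
    return words, best
```

In practice the detector would be queried as a black box and candidates would come from a language model, with a quality filter to keep the text grammatical, as the abstract emphasizes.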
Anthology ID:
2024.emnlp-main.939
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16923–16936
URL:
https://aclanthology.org/2024.emnlp-main.939
Cite (ACL):
James Wang, Ran Li, Junfeng Yang, and Chengzhi Mao. 2024. RAFT: Realistic Attacks to Fool Text Detectors. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 16923–16936, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
RAFT: Realistic Attacks to Fool Text Detectors (Wang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.939.pdf