Catch Me If You GPT: Tutorial on Deepfake Texts

Adaku Uchendu, Saranya Venkatraman, Thai Le, Dongwon Lee


Abstract
In recent years, Natural Language Generation (NLG) techniques have greatly advanced, especially in the realm of Large Language Models (LLMs). With respect to the quality of generated texts, it is no longer trivial to tell the difference between human-written and LLM-generated texts (i.e., deepfake texts). While this is a celebratory feat for NLG, it poses new security risks (e.g., the generation of misinformation). To combat this novel challenge, researchers have developed diverse techniques to detect deepfake texts. While this niche field of deepfake text detection is growing, the field of NLG is growing at a much faster rate, thus making it difficult to understand the complex interplay between state-of-the-art NLG methods and the detectability of their generated texts. To understand such interplay, two new computational problems emerge: (1) Deepfake Text Attribution (DTA) and (2) Deepfake Text Obfuscation (DTO), where the DTA problem is concerned with attributing the authorship of a given text to one of k NLG methods, while the DTO problem is concerned with evading authorship attribution of a given text by modifying parts of the text. In this cutting-edge tutorial, therefore, we call attention to the serious security risk both emerging problems pose and give a comprehensive review of recent literature on the detection and obfuscation of deepfake text authorships. Our tutorial will be 3 hours long with a mix of lecture and hands-on examples for interactive audience participation. You can find our tutorial materials here: https://tinyurl.com/naacl24-tutorial.
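To make the DTA problem concrete, the following is a minimal, illustrative sketch (not the tutorial's actual method): it attributes a text to one of k candidate generators by comparing character-trigram frequency profiles with cosine similarity. The candidate names and training texts are hypothetical stand-ins; real DTA systems use far richer features and learned classifiers.

```python
# Toy Deepfake Text Attribution (DTA) sketch: nearest-profile attribution
# over character-trigram frequencies. Purely illustrative.
from collections import Counter
import math


def trigram_profile(text):
    """Return a normalized character-trigram frequency profile of `text`."""
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}


def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(v * q.get(k, 0.0) for k, v in p.items())
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0


def attribute(text, profiles):
    """Attribute `text` to the candidate whose profile is most similar."""
    query = trigram_profile(text)
    return max(profiles, key=lambda name: cosine(query, profiles[name]))


# Hypothetical "reference texts" for two candidate generators:
profiles = {
    "generator_A": trigram_profile("the quick brown fox jumps over the lazy dog " * 3),
    "generator_B": trigram_profile("zx qv zx qv kj wm kj wm zx qv kj wm " * 3),
}
print(attribute("the quick brown fox", profiles))  # attributes to generator_A
```

The DTO problem is the adversarial counterpart: an obfuscator would perturb the text (e.g., paraphrasing or substituting words) so that such an attributor's similarity scores no longer point to the true generator.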
Anthology ID:
2024.naacl-tutorials.1
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Rui Zhang, Nathan Schneider, Snigdha Chaturvedi
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1–7
URL:
https://aclanthology.org/2024.naacl-tutorials.1
DOI:
10.18653/v1/2024.naacl-tutorials.1
Cite (ACL):
Adaku Uchendu, Saranya Venkatraman, Thai Le, and Dongwon Lee. 2024. Catch Me If You GPT: Tutorial on Deepfake Texts. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts), pages 1–7, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Catch Me If You GPT: Tutorial on Deepfake Texts (Uchendu et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-tutorials.1.pdf