Weiming Zhang
2024
Text Fluoroscopy: Detecting LLM-Generated Text through Intrinsic Features
Xiao Yu | Kejiang Chen | Qi Yang | Weiming Zhang | Nenghai Yu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have revolutionized the domain of natural language processing because of their excellent performance on various tasks. Despite their impressive capabilities, LLMs also have the potential to generate texts that pose risks of misuse. Consequently, detecting LLM-generated text has become increasingly important. Previous LLM-generated text detection methods rely on semantic features, which are captured in the last layer. This causes such methods to overfit the domain of the training set and to generalize poorly. We therefore argue that using intrinsic features rather than semantic features for detection yields better performance. In this work, we design Text Fluoroscopy, a black-box method with better generalizability for detecting LLM-generated text by mining the intrinsic features of the text to be detected. Our method captures the text's intrinsic features by identifying the layer with the largest distribution difference from the last and first layers when projected to the vocabulary space. Our method achieves 7.36% and 2.84% average improvement in detection performance over the baselines when detecting texts from different domains generated by GPT-4 and Claude3, respectively.
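The layer-selection step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the mean-pooled per-layer states, the shared unembedding matrix `W_vocab`, and the use of KL divergence as the distribution-difference measure are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a logit vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # KL divergence KL(p || q) between two probability vectors.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def select_intrinsic_layer(hidden_states, W_vocab):
    """Pick the intermediate layer whose vocabulary-space distribution
    differs most from both the first and last layers.

    hidden_states: (num_layers, d) mean-pooled hidden states per layer
                   (illustrative assumption about pooling).
    W_vocab:       (d, vocab_size) unembedding matrix, assumed shared
                   across layers (logit-lens-style projection).
    """
    dists = [softmax(h @ W_vocab) for h in hidden_states]
    first, last = dists[0], dists[-1]
    # Score each middle layer by its total divergence from the endpoints.
    scores = [kl(d, first) + kl(d, last) for d in dists[1:-1]]
    return 1 + int(np.argmax(scores))  # index into hidden_states

# Toy demo with random states standing in for a real model's activations.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))    # 6 layers, hidden size 8
W = rng.normal(size=(8, 20))   # vocab size 20
layer = select_intrinsic_layer(H, W)
```

The selected layer's representation would then serve as the feature for the downstream detector; with a real model, the hidden states would come from a forward pass (e.g. requesting all layer outputs) rather than random data.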
2021
Sociolectal Analysis of Pretrained Language Models
Sheng Zhang | Xin Zhang | Weiming Zhang | Anders Søgaard
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Using data from English cloze tests, in which subjects also self-reported their gender, age, education, and race, we examine performance differences of pretrained language models across demographic groups defined by these (protected) attributes. We demonstrate wide performance gaps across demographic groups and show that pretrained language models systematically disfavor young non-white male speakers; i.e., not only do pretrained language models learn social biases (stereotypical associations) – they also learn sociolectal biases, learning to speak more like some groups than like others. We show, however, that, with the exception of BERT models, larger pretrained language models reduce some of the performance gaps between majority and minority groups.