A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of Transformer Textual Models

Dang Cuong, Dung Le, Thai Le


Abstract
Existing works have shown that fine-tuned textual transformer models achieve state-of-the-art prediction performance but are also vulnerable to adversarial text perturbations. Traditional adversarial evaluation is often done only after fine-tuning the models and ignores the training data. In this paper, we want to prove that there is also a strong correlation between training data and model robustness. To this end, we extract 13 different features representing a wide range of input fine-tuning corpora properties and use them to predict the adversarial robustness of the fine-tuned models. Focusing mostly on the encoder-only transformer models BERT and RoBERTa, with additional results for BART, ELECTRA, and GPT2, we provide diverse evidence to support our argument. First, empirical analyses show that (a) the extracted features can be used with a lightweight classifier such as Random Forest to effectively predict the attack success rate and (b) the features with the most influence on model robustness have a clear correlation with that robustness. Second, our framework can be used as a fast and effective additional tool for robustness evaluation since it (a) saves 30x-193x runtime compared to the traditional technique, (b) is transferable across models, (c) can be used under adversarial training, and (d) is robust to statistical randomness. Our code is publicly available at https://github.com/CaptainCuong/RobustText_ACL2024.
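The core idea of the paper's framework can be illustrated with a minimal sketch: fit a lightweight regressor (here scikit-learn's Random Forest) on corpus-level features to predict an attack success rate. The 13 features used in the paper are defined there; the feature matrix and target below are synthetic placeholders, not the paper's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for 13 corpus-level features of a fine-tuning dataset
# (the real features are defined in the paper; these values are random).
X = rng.random((200, 13))

# Hypothetical attack success rate, driven by two features plus noise,
# purely to make the sketch runnable.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.random(200)

# Train on 150 datasets' features, predict robustness for the remaining 50
# without running any actual adversarial attacks.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])
predicted_asr = model.predict(X[150:])

# Feature importances indicate which corpus properties most influence robustness.
top_feature = int(np.argmax(model.feature_importances_))
```

Because prediction replaces repeated attack runs, this is where the reported 30x-193x runtime saving over traditional attack-based evaluation comes from.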
Anthology ID:
2024.findings-acl.800
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13475–13491
URL:
https://aclanthology.org/2024.findings-acl.800
DOI:
10.18653/v1/2024.findings-acl.800
Cite (ACL):
Dang Cuong, Dung Le, and Thai Le. 2024. A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of Transformer Textual Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 13475–13491, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of Transformer Textual Models (Cuong et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.800.pdf