Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation

G Shahariar, Jia Chen, Jiachen Li, Yue Dong


Abstract
Recent studies show that text-to-image (T2I) models are vulnerable to adversarial attacks, especially to noun perturbations in text prompts. In this study, we investigate how adversarial attacks targeting different part-of-speech (POS) tags in text prompts affect the images generated by T2I models. We create a high-quality dataset for realistic POS tag token swapping and perform gradient-based attacks to find adversarial suffixes that mislead T2I models into generating images with the altered tokens. Our empirical results show that the attack success rate (ASR) varies significantly among POS tag categories, with nouns, proper nouns, and adjectives being the easiest to attack. We explore the mechanism behind the steering effect of adversarial suffixes, finding that the number of critical tokens and the degree of information fusion vary among POS tags, while features such as suffix transferability are consistent across categories.
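To make the idea of an adversarial-suffix search concrete, here is a minimal toy sketch. The paper performs gradient-based attacks against the text encoder of an actual T2I model; the code below is not that method. It stands in for it with a brute-force greedy coordinate search over a small random embedding table, optimizing a suffix so that the encoding of prompt + suffix moves toward a target encoding. All names (`encode`, `attack`, the vocabulary size, the mean-pooling "encoder") are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50   # toy vocabulary size (assumption, not from the paper)
DIM = 16     # toy embedding dimension
E = rng.normal(size=(VOCAB, DIM))  # toy token embedding table


def encode(token_ids):
    """Toy 'text encoder': mean-pooled, L2-normalized token embeddings."""
    v = E[token_ids].mean(axis=0)
    return v / np.linalg.norm(v)


def attack(prompt_ids, target_ids, suffix_len=4, iters=20):
    """Greedy coordinate search for an adversarial suffix.

    At each step, try every vocabulary token at every suffix position
    and keep the single swap that most increases cosine similarity
    between encode(prompt + suffix) and encode(target).
    """
    target = encode(target_ids)
    suffix = list(rng.integers(0, VOCAB, size=suffix_len))

    def score(s):
        return float(encode(prompt_ids + s) @ target)

    best = score(suffix)
    for _ in range(iters):
        improved = False
        for pos in range(suffix_len):
            for tok in range(VOCAB):
                cand = suffix.copy()
                cand[pos] = tok
                sc = score(cand)
                if sc > best:
                    best, suffix, improved = sc, cand, True
        if not improved:  # local optimum reached
            break
    return suffix, best
```

A gradient-based attack of the kind the paper describes replaces the inner exhaustive sweep with gradients of the similarity objective with respect to the suffix token embeddings, which is what makes the search tractable over a real tokenizer vocabulary.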
Anthology ID:
2024.findings-emnlp.753
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12874–12890
URL:
https://aclanthology.org/2024.findings-emnlp.753
Cite (ACL):
G Shahariar, Jia Chen, Jiachen Li, and Yue Dong. 2024. Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12874–12890, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation (Shahariar et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.753.pdf