LinkPrompt: Natural and Universal Adversarial Attacks on Prompt-based Language Models

Yue Xu, Wenjie Wang


Abstract
Prompt-based learning is a language model training paradigm that adapts Pre-trained Language Models (PLMs) to downstream tasks and has raised performance benchmarks across a wide range of natural language processing (NLP) tasks. Rather than fine-tuning the model with a fixed prompt template, a line of research demonstrates the effectiveness of searching for the prompt via optimization. This prompt optimization process also offers insight into generating adversarial prompts that mislead the model, raising concerns about the adversarial vulnerability of the paradigm. Recent studies have shown that universal adversarial triggers (UATs) can be generated that alter the predictions not only of the target PLMs but also of the corresponding Prompt-based Fine-tuning Models (PFMs) trained under this paradigm. However, the UATs found in previous work are often unreadable tokens or characters, so adaptive defenses can easily distinguish them from natural text. In this work, we account for the naturalness of UATs and develop LinkPrompt, an adversarial attack algorithm that generates UATs via gradient-based beam search, effectively attacking the target PLMs and PFMs while maintaining naturalness among the trigger tokens. Extensive results demonstrate the effectiveness of LinkPrompt, as well as the transferability of its UATs to the open-source Large Language Model (LLM) Llama2 and the API-accessed LLM GPT-3.5-turbo. Resources are available at https://github.com/SavannahXu79/LinkPrompt.
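To make the idea concrete, below is a minimal, hypothetical sketch of the kind of gradient-guided beam search the abstract describes: candidate trigger tokens are proposed from a HotFlip-style first-order approximation of the victim model's loss gradient, and beams are re-ranked by a combined objective of attack strength and fluency. The victim model (a public SST-2 sentiment classifier), the GPT-2 fluency scorer, the toy batch, the alpha weighting, and helper names such as beam_search and grad_candidates are all illustrative assumptions; LinkPrompt's actual objective, models, and candidate selection differ (see the paper and repository).

# Hypothetical sketch of gradient-guided beam search for "natural" universal
# adversarial triggers (UATs). All model choices and weights are assumptions
# for illustration; they are not LinkPrompt's actual setup.
import torch
import torch.nn.functional as F
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          GPT2LMHeadModel, GPT2TokenizerFast)

VICTIM = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed victim
vic_tok = AutoTokenizer.from_pretrained(VICTIM)
victim = AutoModelForSequenceClassification.from_pretrained(VICTIM).eval()
lm_tok = GPT2TokenizerFast.from_pretrained("gpt2")          # fluency scorer
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
emb = victim.get_input_embeddings().weight                  # (vocab, hidden)

texts = ["the movie was wonderful", "a truly delightful film"]  # toy batch
target = torch.tensor([0, 0])  # label 0 = NEGATIVE: flip positives to negative

def with_trigger(trigger_ids):
    """Insert the trigger right after [CLS] in every batch example."""
    enc = vic_tok(texts, return_tensors="pt", padding=True)
    trig = trigger_ids.unsqueeze(0).expand(len(texts), -1)
    ids = torch.cat([enc.input_ids[:, :1], trig, enc.input_ids[:, 1:]], dim=1)
    ones = torch.ones(len(texts), 1 + trigger_ids.numel(), dtype=torch.long)
    mask = torch.cat([ones, enc.attention_mask[:, 1:]], dim=1)
    return ids, mask

def adv_loss(trigger_ids):
    """Cross-entropy toward the attacker's target labels (lower = stronger)."""
    ids, mask = with_trigger(trigger_ids)
    with torch.no_grad():
        logits = victim(input_ids=ids, attention_mask=mask).logits
    return F.cross_entropy(logits, target).item()

def naturalness(trigger_ids):
    """GPT-2 log-likelihood of the trigger text as a fluency proxy."""
    ids = lm_tok(vic_tok.decode(trigger_ids), return_tensors="pt").input_ids
    with torch.no_grad():
        return -lm(ids, labels=ids).loss.item()  # higher = more natural

def grad_candidates(trigger_ids, pos, topk=10):
    """HotFlip-style first-order candidates for the trigger slot at `pos`."""
    ids, mask = with_trigger(trigger_ids)
    embeds = victim.get_input_embeddings()(ids).detach().requires_grad_(True)
    logits = victim(inputs_embeds=embeds, attention_mask=mask).logits
    loss = F.cross_entropy(logits, target)
    grad = torch.autograd.grad(loss, embeds)[0][:, 1 + pos].mean(0)
    scores = emb.detach() @ -grad          # first-order steps that lower loss
    return scores.topk(topk).indices.tolist()

def beam_search(length=3, beam=3, topk=10, alpha=0.5):
    start = torch.tensor([vic_tok.convert_tokens_to_ids("the")] * length)
    beams = [start]
    for pos in range(length):  # fill trigger positions left to right
        scored = []
        for trig in beams:
            for cand in grad_candidates(trig, pos, topk):
                new = trig.clone()
                new[pos] = cand
                # combined objective: attack strength + weighted fluency
                scored.append((-adv_loss(new) + alpha * naturalness(new), new))
        scored.sort(key=lambda s: s[0], reverse=True)
        beams = [t for _, t in scored[:beam]]
    return vic_tok.decode(beams[0])

print(beam_search())

The key design point mirrored here is the re-ranking step: gradient information alone tends to select unreadable tokens, so each candidate beam is additionally scored by a language-model fluency term, trading a little attack strength for triggers that read as natural text.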
Anthology ID:
2024.naacl-long.360
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
6473–6486
URL:
https://aclanthology.org/2024.naacl-long.360
Cite (ACL):
Yue Xu and Wenjie Wang. 2024. LinkPrompt: Natural and Universal Adversarial Attacks on Prompt-based Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6473–6486, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
LinkPrompt: Natural and Universal Adversarial Attacks on Prompt-based Language Models (Xu & Wang, NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.360.pdf
Copyright:
 2024.naacl-long.360.copyright.pdf