PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts

Xiangjue Dong, Yun He, Ziwei Zhu, James Caverlee


Abstract
A key component of modern conversational systems is the Dialogue State Tracker (or DST), which models a user’s goals and needs. Toward building more robust and reliable DSTs, we introduce a prompt-based learning approach to automatically generate effective adversarial examples to probe DST models. Two key characteristics of this approach are: (i) it only needs the output of the DST, with no access to model parameters, and (ii) it can learn to generate natural language utterances that can target any DST. Through experiments over state-of-the-art DSTs, the proposed framework leads to the greatest reduction in accuracy and the best attack success rate while maintaining good fluency and a low perturbation ratio. We also show how the generated adversarial examples can bolster a DST through adversarial training. These results indicate the strength of prompt-based attacks on DSTs and leave open avenues for continued refinement.
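To make the abstract's black-box setting concrete, the following is a minimal, hypothetical sketch of how such a probe could be scored: the attacker only queries the DST for its predicted dialogue state (no access to parameters), rewrites each user utterance with some generator, and counts the fraction of turns whose prediction changes. The DST interface, the generator, and the helper name below are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical black-box probing loop, in the spirit of the abstract's claim
# that the attack "only needs the output of the DST with no need for model
# parameters". All names here are assumptions for illustration only.
from typing import Callable, Dict, List

DialogueState = Dict[str, str]  # slot -> value, e.g. {"hotel-area": "north"}

def attack_success_rate(
    dst_predict: Callable[[str], DialogueState],   # black-box DST: utterance -> predicted state
    generate_adversarial: Callable[[str], str],    # assumed prompt-based rewriter
    utterances: List[str],
) -> float:
    """Fraction of turns whose predicted state changes after the adversarial
    rewrite (one simple notion of attack success)."""
    successes = 0
    for utt in utterances:
        original_state = dst_predict(utt)
        adv_utt = generate_adversarial(utt)        # semantics-preserving rewrite
        adv_state = dst_predict(adv_utt)
        if adv_state != original_state:            # prediction flipped -> attack succeeded
            successes += 1
    return successes / max(len(utterances), 1)
```

In practice one would also check fluency and the perturbation ratio of the rewrites, as the abstract notes; this sketch only illustrates the output-only access pattern.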
Anthology ID:
2023.findings-acl.677
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10651–10666
URL:
https://aclanthology.org/2023.findings-acl.677
DOI:
10.18653/v1/2023.findings-acl.677
Cite (ACL):
Xiangjue Dong, Yun He, Ziwei Zhu, and James Caverlee. 2023. PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10651–10666, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts (Dong et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.677.pdf