Adversarial Textual Robustness on Visual Dialog

Lu Yu, Verena Rieser

Abstract
Adversarial robustness evaluates the worst-case performance of a machine learning model in order to ensure its safety and reliability. For example, a minimal change to the user input, such as swapping a word for a synonym, can cause a previously correct model to return a wrong answer. Using this scenario, this study is the first to investigate the robustness of visually grounded dialog models to textual attacks. We first aim to understand how the different multimodal input components contribute to model robustness. Our results show that models which encode dialog history are more robust, because the history provides redundant information. This is in contrast to prior work, which finds that dialog history is negligible for model performance on this task. We also evaluate how to generate adversarial test examples which successfully fool the model but remain undetected by the user/software designer. Our analysis shows that both the textual and the visual context are important for generating plausible attacks.
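
The sketch below illustrates the general idea of the attack setting described in the abstract: a question the model currently answers correctly is perturbed by a single synonym swap until the answer flips. The VisualDialogModel-style answer() interface and the toy synonym table are illustrative assumptions for exposition only; they are not the paper's models, data, or attack method.

# Minimal sketch (assumptions noted below), not the paper's actual attack.
# `model.answer(image, history, question)` is a hypothetical interface;
# the synonym table is a toy stand-in for a real lexical resource.

SYNONYMS = {
    "picture": ["photo", "image"],
    "man": ["guy", "gentleman"],
    "wearing": ["sporting"],
}

def synonym_variants(question):
    """Yield questions differing from the original by a single synonym swap."""
    tokens = question.split()
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok.lower(), []):
            yield " ".join(tokens[:i] + [syn] + tokens[i + 1:])

def attack(model, image, history, question, gold_answer):
    """Return the first minimally changed question that flips a correct answer."""
    if model.answer(image, history, question) != gold_answer:
        return None  # model is already wrong on the original; nothing to attack
    for variant in synonym_variants(question):
        if model.answer(image, history, variant) != gold_answer:
            return variant  # successful adversarial example
    return None

In this sketch, an attack counts as successful only if the unperturbed question was answered correctly, mirroring the worst-case scenario described above.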
Anthology ID:
2023.findings-acl.212
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3422–3438
URL:
https://aclanthology.org/2023.findings-acl.212
DOI:
10.18653/v1/2023.findings-acl.212
Cite (ACL):
Lu Yu and Verena Rieser. 2023. Adversarial Textual Robustness on Visual Dialog. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3422–3438, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Adversarial Textual Robustness on Visual Dialog (Yu & Rieser, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.212.pdf