Foiling Training-Time Attacks on Neural Machine Translation Systems

Jun Wang, Xuanli He, Benjamin Rubinstein, Trevor Cohn


Abstract
Neural machine translation (NMT) systems are vulnerable to backdoor attacks, whereby an attacker injects poisoned samples into training such that a trained model produces malicious translations. Nevertheless, there is little research on defending against such backdoor attacks in NMT. In this paper, we first show that backdoor attacks that have been successful in text classification are also effective against machine translation tasks. We then present a novel defence method that exploits a key property of most backdoor attacks: namely the asymmetry between the source and target language sentences, which is used to facilitate malicious text insertions, substitutions and suchlike. Our technique uses word alignment coupled with language model scoring to detect outlier tokens, and thus can find and filter out training instances which may contain backdoors. Experimental results demonstrate that our technique can significantly reduce the success of various attacks by up to 89.0%, while not affecting predictive accuracy.
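The abstract outlines the defence at a high level: align each source sentence with its target sentence, treat target tokens that have no aligned source counterpart as candidate injections, score them with a target-side language model, and filter training pairs whose unaligned tokens look like outliers. The sketch below illustrates that pipeline; it is not the authors' released code. The `aligner` callable (e.g. something wrapping fast_align or awesome-align), the GPT-2 stand-in for the target-language LM, and the surprisal threshold are all assumptions made for illustration.

```python
"""Minimal sketch of alignment-plus-LM filtering for poisoned NMT training pairs."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM over the target language would do; GPT-2 is used here as a stand-in.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()


def token_surprisals(sentence: str) -> list[tuple[str, float]]:
    """Per-token negative log-likelihood of the sentence under the language model."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = lm(**enc).logits
    # Shift so that position i predicts token i+1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    ids = enc["input_ids"][0, 1:]
    nll = -log_probs[torch.arange(ids.numel()), ids]
    tokens = tokenizer.convert_ids_to_tokens(ids.tolist())
    return list(zip(tokens, nll.tolist()))


def is_suspicious(src: str, tgt: str, aligner, nll_threshold: float = 12.0) -> bool:
    """
    Flag a sentence pair if some target word is (a) unaligned to any source word
    and (b) highly surprising under the target-side LM.

    `aligner(src_words, tgt_words)` is an assumed callable returning a set of
    (src_index, tgt_index) alignment pairs; `nll_threshold` is an illustrative value.
    """
    src_words, tgt_words = src.split(), tgt.split()
    aligned_tgt = {j for _, j in aligner(src_words, tgt_words)}
    unaligned = [w for j, w in enumerate(tgt_words) if j not in aligned_tgt]
    if not unaligned:
        return False

    # Rough matching between GPT-2 subword tokens and whitespace-separated words;
    # the pair is flagged if any subword overlapping an unaligned word is an LM outlier.
    return any(
        score > nll_threshold
        and any(word in tok or tok.lstrip("Ġ") in word for word in unaligned)
        for tok, score in token_surprisals(tgt)
    )
```

In use, one would run `is_suspicious` over every pair in the parallel training corpus and discard flagged pairs before training the NMT model; how closely this matches the paper's exact scoring and thresholding is left to the paper itself.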
Anthology ID: 2022.findings-emnlp.435
Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5906–5913
URL: https://aclanthology.org/2022.findings-emnlp.435
DOI: 10.18653/v1/2022.findings-emnlp.435
Cite (ACL):
Jun Wang, Xuanli He, Benjamin Rubinstein, and Trevor Cohn. 2022. Foiling Training-Time Attacks on Neural Machine Translation Systems. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5906–5913, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Foiling Training-Time Attacks on Neural Machine Translation Systems (Wang et al., Findings 2022)
PDF: https://aclanthology.org/2022.findings-emnlp.435.pdf