Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation

Siyu Lai, Zhen Yang, Fandong Meng, Xue Zhang, Yufeng Chen, Jinan Xu, Jie Zhou


Abstract
Generating adversarial examples for Neural Machine Translation (NMT) with single Round-Trip Translation (RTT) has achieved promising results by relaxing the meaning-preserving restriction. However, a potential pitfall of this approach is that we cannot decide whether the generated examples are adversarial to the target NMT model or to the auxiliary backward one, as the reconstruction error through the RTT can be attributed to either. To remedy this problem, we propose a new definition of NMT adversarial examples based on Doubly Round-Trip Translation (DRTT). Specifically, apart from the source-target-source RTT, we also consider the target-source-target one, which is used to pick out the authentic adversarial examples for the target NMT model. Additionally, to enhance the robustness of the NMT model, we introduce masked language models to construct bilingual adversarial pairs based on DRTT, which are used to train the NMT model directly. Extensive experiments on both clean and noisy test sets (including artificial and natural noise) show that our approach substantially improves the robustness of NMT models.
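The filtering idea in the abstract can be sketched as follows: run both round trips and attribute the error. This is a minimal illustrative sketch, not the authors' implementation; the `forward`/`backward` model functions, the similarity scorer `sim`, and the thresholds `tau_src`/`tau_tgt` are all hypothetical stand-ins.

```python
# Sketch of the DRTT criterion: decide whether a perturbed source is
# adversarial to the *target* (forward) NMT model rather than to the
# auxiliary backward model. All names and thresholds are illustrative.

def is_authentic_adversarial(src_adv, forward, backward, sim,
                             tau_src=0.5, tau_tgt=0.5):
    # Source-target-source round trip: src_adv -> tgt -> src_recon
    tgt = forward(src_adv)
    src_recon = backward(tgt)
    src_rtt_error = 1.0 - sim(src_adv, src_recon)

    # Target-source-target round trip: tgt -> src_back -> tgt_recon
    src_back = backward(tgt)
    tgt_recon = forward(src_back)
    tgt_rtt_error = 1.0 - sim(tgt, tgt_recon)

    # High source-side error combined with low target-side error lets us
    # attribute the failure to the forward model: an "authentic"
    # adversarial example in the sense described above.
    return src_rtt_error > tau_src and tgt_rtt_error < tau_tgt


if __name__ == "__main__":
    # Toy stub models: the flawed forward model mistranslates "cat",
    # while the backward model is a clean identity mapping.
    forward = lambda s: s.replace("cat", "dog")
    backward = lambda s: s
    sim = lambda a, b: 1.0 if a == b else 0.0

    print(is_authentic_adversarial("the cat sits", forward, backward, sim))  # True
    print(is_authentic_adversarial("a bird sings", forward, backward, sim))  # False
```

With real NMT systems, `forward` and `backward` would be trained translation models and `sim` a semantic similarity measure over sentences; the toy stubs only serve to make the attribution logic concrete.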
Anthology ID:
2022.naacl-main.316
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4256–4266
URL:
https://aclanthology.org/2022.naacl-main.316
DOI:
10.18653/v1/2022.naacl-main.316
Cite (ACL):
Siyu Lai, Zhen Yang, Fandong Meng, Xue Zhang, Yufeng Chen, Jinan Xu, and Jie Zhou. 2022. Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4256–4266, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation (Lai et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.316.pdf