Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots

Samson Tan, Shafiq Joty


Abstract
Multilingual models have demonstrated impressive cross-lingual transfer performance. However, test sets like XNLI are monolingual at the example level. In multilingual communities, it is common for polyglots to code-mix when conversing with each other. Inspired by this phenomenon, we present two strong black-box adversarial attacks (one word-level, one phrase-level) for multilingual models that push their ability to handle code-mixed sentences to the limit. The former uses bilingual dictionaries to propose perturbations, with translations of the clean example used for sense disambiguation. The latter directly aligns the clean example with its translations before extracting phrases as perturbations. Our phrase-level attack has a success rate of 89.75% against XLM-R-large, bringing its average accuracy on XNLI down from 79.85% to 8.18%. Finally, we propose an efficient adversarial training scheme that trains in the same number of steps as the original model, and we show that it creates more language-invariant representations, improving both clean and robust accuracy in the absence of lexical overlap without degrading performance on the original examples.
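The word-level attack described in the abstract proposes perturbations by substituting words with their bilingual-dictionary translations. As a rough illustration of that candidate-generation step only (not the authors' implementation — the toy English-to-German dictionary, function name, and `max_swaps` parameter are all assumptions for the sketch, and the paper's sense disambiguation via translations is omitted), one might enumerate code-mixed candidates like this:

```python
import itertools

# Toy English -> German dictionary standing in for the bilingual
# dictionaries mentioned in the abstract (illustrative only).
BILINGUAL_DICT = {
    "cat": "Katze",
    "sat": "sass",
    "mat": "Matte",
}

def candidate_perturbations(sentence, dictionary, max_swaps=2):
    """Generate code-mixed candidates by swapping up to `max_swaps`
    words for their dictionary translations. The attack is black-box:
    candidates would be scored by querying the victim model, with no
    access to its gradients."""
    tokens = sentence.split()
    swappable = [i for i, t in enumerate(tokens) if t.lower() in dictionary]
    candidates = []
    for k in range(1, max_swaps + 1):
        for positions in itertools.combinations(swappable, k):
            mixed = list(tokens)
            for i in positions:
                mixed[i] = dictionary[mixed[i].lower()]
            candidates.append(" ".join(mixed))
    return candidates

cands = candidate_perturbations("the cat sat on the mat", BILINGUAL_DICT)
# e.g. "the Katze sat on the mat", "the cat sass on the Matte", ...
```

In the paper's setting, each candidate would then be fed to the multilingual model and the perturbation that most degrades its prediction kept, pushing the model toward the reported accuracy drop.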
Anthology ID:
2021.naacl-main.282
Original:
2021.naacl-main.282v1
Version 2:
2021.naacl-main.282v2
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3596–3616
URL:
https://aclanthology.org/2021.naacl-main.282
DOI:
10.18653/v1/2021.naacl-main.282
PDF:
https://aclanthology.org/2021.naacl-main.282.pdf