It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations

Samson Tan, Shafiq Joty, Min-Yen Kan, Richard Socher


Abstract
Training on only perfect Standard English corpora predisposes pre-trained neural networks to discriminate against minorities from non-standard linguistic backgrounds (e.g., African American Vernacular English, Colloquial Singapore English). We perturb the inflectional morphology of words to craft plausible and semantically similar adversarial examples that expose these biases in popular NLP models such as BERT and Transformer, and we show that adversarially fine-tuning these models for a single epoch significantly improves robustness without sacrificing performance on clean data.
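
To make the perturbation idea concrete, below is a minimal sketch of an inflectional perturbation. It is not the authors' MORPHEUS implementation (which adversarially searches for the inflections that most degrade a target model); it simply re-inflects content words at random using the spaCy and lemminflect libraries, and all function and variable names are illustrative.

# Minimal, illustrative sketch of inflectional perturbation -- NOT the authors'
# MORPHEUS method, which searches for the worst-case inflection per word.
# Assumes spaCy (with the en_core_web_sm model) and lemminflect are installed.
import random

import spacy
from lemminflect import getAllInflections

nlp = spacy.load("en_core_web_sm")


def perturb_inflections(sentence: str, seed: int = 0) -> str:
    """Replace each noun/verb/adjective with a random inflection of its lemma."""
    rng = random.Random(seed)
    out = []
    for tok in nlp(sentence):
        if tok.pos_ in {"NOUN", "VERB", "ADJ"}:
            # getAllInflections maps Penn tags to forms,
            # e.g. {'VBZ': ('exposes',), 'VBD': ('exposed',), ...}
            forms = getAllInflections(tok.lemma_, upos=tok.pos_)
            candidates = sorted({f for tup in forms.values() for f in tup})
            if candidates:
                out.append(rng.choice(candidates))
                continue
        out.append(tok.text)
    return " ".join(out)


if __name__ == "__main__":
    print(perturb_inflections("The model exposes biases in popular NLP systems."))

MORPHEUS itself (released at salesforce/morpheus) instead queries the target model and keeps the inflection candidates that most increase its loss; the resulting adversarial examples drive the one-epoch adversarial fine-tuning described in the abstract.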
Anthology ID: 2020.acl-main.263
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 2920–2935
URL: https://aclanthology.org/2020.acl-main.263
DOI: 10.18653/v1/2020.acl-main.263
Cite (ACL): Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020. It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2920–2935, Online. Association for Computational Linguistics.
Cite (Informal): It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations (Tan et al., ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.263.pdf
Video: http://slideslive.com/38928803
Code: salesforce/morpheus
Data: SQuAD