Evaluating the Robustness of Neural Language Models to Input Perturbations

Milad Moradi, Matthias Samwald


Abstract
High-performance neural language models have obtained state-of-the-art results on a wide range of Natural Language Processing (NLP) tasks. However, results for common benchmark datasets often do not reflect model reliability and robustness when applied to noisy, real-world data. In this study, we design and implement various types of character-level and word-level perturbation methods to simulate realistic scenarios in which input texts may be slightly noisy or differ from the data distribution on which NLP systems were trained. Conducting comprehensive experiments on different NLP tasks, we investigate the ability of high-performance language models such as BERT, XLNet, RoBERTa, and ELMo to handle different types of input perturbations. The results suggest that language models are sensitive to input perturbations and that their performance can decrease even when small changes are introduced. We highlight that models need to be further improved and that current benchmarks do not reflect model robustness well. We argue that evaluations on perturbed inputs should routinely complement widely-used benchmarks in order to yield a more realistic understanding of NLP systems’ robustness.
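To illustrate the kind of character-level perturbation the abstract describes, here is a minimal sketch of one common variant, adjacent-character swapping, simulating typing noise. This is an illustrative assumption, not the authors' implementation (their code is in the linked mmoradi-iut/nlp-perturbation repository); the function name, rate parameter, and swap strategy are all hypothetical.

```python
import random

def char_swap(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Illustrative character-level perturbation: randomly swap adjacent
    alphabetic characters at the given rate, simulating typos.
    Not the paper's implementation -- see mmoradi-iut/nlp-perturbation."""
    rng = random.Random(seed)  # seeded for reproducible perturbations
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # skip ahead so the same pair is not swapped back
        else:
            i += 1
    return "".join(chars)
```

A robustness evaluation in this style would feed both the clean and perturbed versions of each benchmark example to a trained model and compare task metrics across the two conditions.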
Anthology ID:
2021.emnlp-main.117
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1558–1570
URL:
https://aclanthology.org/2021.emnlp-main.117
DOI:
10.18653/v1/2021.emnlp-main.117
Cite (ACL):
Milad Moradi and Matthias Samwald. 2021. Evaluating the Robustness of Neural Language Models to Input Perturbations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1558–1570, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Evaluating the Robustness of Neural Language Models to Input Perturbations (Moradi & Samwald, EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.117.pdf
Video:
https://aclanthology.org/2021.emnlp-main.117.mp4
Code:
mmoradi-iut/nlp-perturbation
Data:
SST