Understanding Model Robustness to User-generated Noisy Texts

Jakub Náplava, Martin Popel, Milan Straka, Jana Straková


Abstract
The sensitivity of deep neural models to input noise is a well-known challenging problem. In NLP, model performance often deteriorates with naturally occurring noise, such as spelling errors. To mitigate this issue, models may leverage artificially noised data. However, the amount and type of generated noise have so far been determined arbitrarily. We therefore propose to model the errors statistically from grammatical-error-correction corpora. We present a thorough evaluation of the robustness of several state-of-the-art NLP systems in multiple languages, with tasks including morpho-syntactic analysis, named entity recognition, neural machine translation, a subset of the GLUE benchmark, and reading comprehension. We also compare two approaches to addressing the performance drop: a) training the NLP models with noised data generated by our framework; and b) reducing the input noise with an external system for natural language correction. The code is released at https://github.com/ufal/kazitext.
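As a rough illustration of the statistical noising idea described in the abstract, here is a minimal Python sketch assuming a simplified token-level error model: an error rate is estimated from aligned (noisy, corrected) sentence pairs of a GEC corpus and then used to corrupt clean text at a matching rate. All function names and the typo operations are illustrative assumptions, not the actual kazitext API.

import random

def estimate_token_error_rate(pairs):
    # Fraction of tokens that differ between noisy and corrected sentences
    # (naive position-wise alignment; real pipelines align more carefully).
    changed = total = 0
    for noisy, clean in pairs:
        for n_tok, c_tok in zip(noisy.split(), clean.split()):
            changed += n_tok != c_tok
            total += 1
    return changed / max(total, 1)

def typo(token):
    # Introduce one random character-level error into a token.
    if not token:
        return token
    i = random.randrange(len(token))
    op = random.choice(["delete", "swap", "duplicate"])
    if op == "delete":
        return token[:i] + token[i + 1:]
    if op == "swap" and i + 1 < len(token):
        return token[:i] + token[i + 1] + token[i] + token[i + 2:]
    return token[:i] + token[i] + token[i:]  # duplicate the character

def noise_sentence(sentence, rate):
    # Corrupt each whitespace-separated token with probability `rate`.
    return " ".join(typo(t) if random.random() < rate else t
                    for t in sentence.split())

# Toy GEC corpus: one (noisy, corrected) pair.
pairs = [("Ther is an eror in this sentense .",
          "There is an error in this sentence .")]
rate = estimate_token_error_rate(pairs)
print(noise_sentence("Noise injection makes models more robust .", rate))

The full framework models richer, language-specific error statistics (e.g., diacritics, casing, and spelling phenomena) rather than a single uniform token error rate; this sketch only conveys the estimate-then-inject workflow.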
Anthology ID:
2021.wnut-1.38
Volume:
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Month:
November
Year:
2021
Address:
Online
Editors:
Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue:
WNUT
Publisher:
Association for Computational Linguistics
Pages:
340–350
URL:
https://aclanthology.org/2021.wnut-1.38
DOI:
10.18653/v1/2021.wnut-1.38
Cite (ACL):
Jakub Náplava, Martin Popel, Milan Straka, and Jana Straková. 2021. Understanding Model Robustness to User-generated Noisy Texts. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 340–350, Online. Association for Computational Linguistics.
Cite (Informal):
Understanding Model Robustness to User-generated Noisy Texts (Náplava et al., WNUT 2021)
PDF:
https://aclanthology.org/2021.wnut-1.38.pdf
Code:
ufal/kazitext
Data:
AKCES-GEC, FCE, GLUE