Synthetic Data for English Lexical Normalization: How Close Can We Get to Manually Annotated Data?

Kelly Dekker, Rob van der Goot


Abstract
Social media is a valuable data resource for various natural language processing (NLP) tasks. However, standard NLP tools are often designed with standard texts in mind, and their performance decreases heavily when applied to social media data. One solution to this problem is to adapt the input text to a more standard form, a task also referred to as normalization. Automatic approaches to normalization have been shown to improve performance on a variety of NLP tasks. However, all of these systems are supervised, making them heavily dependent on the availability of training data for the correct language and domain. In this work, we attempt to overcome this dependence by automatically generating training data for lexical normalization. Starting from raw tweets, we explore two directions: inserting non-standardness (noise) into standard text, and automatically normalizing in an unsupervised setting. We evaluate our approaches using an existing lexical normalization system; the best results are achieved by a custom error-generation system that inserts noise and makes use of some manually created datasets. With this system, we score 94.29 accuracy on the test data, compared to 95.22 when the normalization system is trained on human-annotated data. Our best system that does not depend on any type of annotation is based on word embeddings and scores 92.04 accuracy. Finally, we performed an experiment in which we asked humans to predict whether a sentence was written by a human or generated by our best model; in most cases, it proved hard for humans to detect the automatically generated sentences.
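The two directions can be illustrated with short sketches. First, a minimal sketch of noise insertion, assuming a handful of hypothetical perturbation rules (slang substitution, vowel dropping, character repetition); the rules, probabilities, and the noisify helper below are illustrative only, not the paper's actual error-generation system, which derives its rules from manually created datasets.

```python
import random

# Hypothetical perturbation rules; the paper's actual rules are
# learned from annotated normalization data and not reproduced here.
SLANG = {"you": "u", "are": "r", "to": "2", "for": "4", "be": "b"}
VOWELS = set("aeiou")

def noisify(word: str, p: float = 0.3) -> str:
    """Return a possibly noised version of a standard word."""
    if word in SLANG and random.random() < p:
        return SLANG[word]
    if random.random() < p and len(word) > 3:
        # Drop one internal vowel, e.g. "tomorrow" -> "tomrrow".
        idx = [i for i, c in enumerate(word[1:-1], 1) if c in VOWELS]
        if idx:
            i = random.choice(idx)
            return word[:i] + word[i + 1:]
    if random.random() < p:
        # Repeat the final character, e.g. "nice" -> "niceee".
        return word + word[-1] * random.randint(1, 2)
    return word

def noisify_sentence(sentence: str, p: float = 0.3) -> str:
    return " ".join(noisify(w, p) for w in sentence.split())

if __name__ == "__main__":
    random.seed(0)
    print(noisify_sentence("are you coming to the party tomorrow"))
```

Second, a minimal sketch of the embedding-based, annotation-free direction: map an out-of-vocabulary word to its nearest in-vocabulary neighbor in embedding space. The toy vectors, LEXICON, and normalize function are assumptions for illustration; in practice the embeddings would be trained on large amounts of raw tweets so that noisy spellings land near their standard forms.

```python
import numpy as np

# Toy embeddings for illustration; real ones would come from a model
# such as word2vec trained on raw tweets.
EMB = {
    "tomorrow": np.array([0.90, 0.10, 0.00]),
    "tmrw":     np.array([0.88, 0.12, 0.02]),
    "party":    np.array([0.10, 0.90, 0.10]),
}
LEXICON = {"tomorrow", "party"}  # in-vocabulary (standard) words

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def normalize(word: str) -> str:
    """Map an out-of-vocabulary word to its nearest in-vocabulary neighbor."""
    if word in LEXICON or word not in EMB:
        return word
    return max(LEXICON, key=lambda w: cosine(EMB[word], EMB[w]))

print(normalize("tmrw"))  # -> "tomorrow"
```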
Anthology ID: 2020.lrec-1.773
Volume: Proceedings of the Twelfth Language Resources and Evaluation Conference
Month: May
Year: 2020
Address: Marseille, France
Editors: Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue: LREC
Publisher: European Language Resources Association
Pages: 6300–6309
Language: English
URL: https://aclanthology.org/2020.lrec-1.773
Cite (ACL): Kelly Dekker and Rob van der Goot. 2020. Synthetic Data for English Lexical Normalization: How Close Can We Get to Manually Annotated Data?. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6300–6309, Marseille, France. European Language Resources Association.
Cite (Informal): Synthetic Data for English Lexical Normalization: How Close Can We Get to Manually Annotated Data? (Dekker & van der Goot, LREC 2020)
PDF: https://aclanthology.org/2020.lrec-1.773.pdf