A Simple Joint Model for Improved Contextual Neural Lemmatization

Chaitanya Malaviya, Shijie Wu, Ryan Cotterell


Abstract
English verbs have multiple forms. For instance, talk may also appear as talks, talked or talking, depending on the context. The NLP task of lemmatization seeks to map these diverse forms back to a canonical one, known as the lemma. We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages from the Universal Dependencies corpora. Our paper describes the model in addition to training and decoding procedures. Error analysis indicates that joint morphological tagging and lemmatization is especially helpful in low-resource lemmatization and languages that display a larger degree of morphological complexity.
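To make the task concrete, here is a toy, dictionary-based sketch of lemmatization using the abstract's own example forms. This is purely illustrative and is not the paper's neural model (a real lemmatizer must generalize to unseen forms and, as the paper argues, can benefit from jointly predicted morphological tags); the table and function names are invented for this example.

```python
# Toy illustration of the lemmatization task: map inflected forms
# of "talk" back to their canonical lemma. A real system (like the
# paper's joint neural model) must handle unseen forms and context.
LEMMA_TABLE = {
    "talk": "talk",
    "talks": "talk",
    "talked": "talk",
    "talking": "talk",
}

def lemmatize(token: str) -> str:
    """Look up the lemma of a known form; fall back to the form itself."""
    return LEMMA_TABLE.get(token.lower(), token.lower())

for form in ["talks", "talked", "talking"]:
    print(form, "->", lemmatize(form))
```

In practice the lookup table is replaced by a learned string-transduction model, since no finite table can cover the inflectional paradigms of morphologically rich languages.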
Anthology ID:
N19-1155
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1517–1528
URL:
https://aclanthology.org/N19-1155
DOI:
10.18653/v1/N19-1155
Cite (ACL):
Chaitanya Malaviya, Shijie Wu, and Ryan Cotterell. 2019. A Simple Joint Model for Improved Contextual Neural Lemmatization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1517–1528, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
A Simple Joint Model for Improved Contextual Neural Lemmatization (Malaviya et al., NAACL 2019)
PDF:
https://aclanthology.org/N19-1155.pdf