Unlabeled Data for Morphological Generation With Character-Based Sequence-to-Sequence Models

Katharina Kann, Hinrich Schütze


Abstract
We present a semi-supervised way of training a character-based encoder-decoder recurrent neural network for morphological reinflection—the task of generating one inflected wordform from another. This is achieved by using unlabeled tokens or random strings as training data for an autoencoding task, adapting a network for morphological reinflection, and performing multi-task training. We thus use limited labeled data more effectively, obtaining up to 9.92% improvement over state-of-the-art baselines for 8 different languages.
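The abstract describes mixing labeled reinflection pairs with an auxiliary autoencoding task built from unlabeled tokens or random strings, then training one character-level sequence-to-sequence model on both tasks. Below is a minimal, hypothetical sketch of that data-mixing step only (not the paper's actual code): the tag strings such as <COPY> and the morphological tag format are assumptions for illustration, and the model training itself is omitted.

```python
import random
import string

# Labeled reinflection examples: source characters plus a target morphological
# tag as input, target characters as output (tag format is illustrative only).
labeled = [
    ("f l i e g e n <V.PTCP;PST>", "g e f l o g e n"),   # German: fliegen -> geflogen
    ("b r i n g e n <V.PTCP;PST>", "g e b r a c h t"),   # German: bringen -> gebracht
]

# Unlabeled tokens, e.g. drawn from a raw corpus.
unlabeled_tokens = ["haus", "laufen", "schnell"]

def autoencoding_example(token):
    """Build an auxiliary example whose target is simply a copy of the input."""
    chars = " ".join(token)
    return (chars + " <COPY>", chars)   # <COPY> is an assumed auxiliary-task marker

def random_string_example(length=8):
    """Autoencoding example built from a random character string."""
    s = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
    return autoencoding_example(s)

# Mix the two tasks into one training set for multi-task training of a single
# character-based encoder-decoder model.
auxiliary = [autoencoding_example(t) for t in unlabeled_tokens]
auxiliary += [random_string_example() for _ in range(3)]
train_set = labeled + auxiliary
random.shuffle(train_set)

for src, tgt in train_set:
    print(f"{src}  ->  {tgt}")
```

The intent of the auxiliary copy task, as described in the abstract, is to let the encoder-decoder learn character-level representations from plentiful unlabeled material so that the limited labeled reinflection data can be used more effectively.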
Anthology ID: W17-4111
Volume: Proceedings of the First Workshop on Subword and Character Level Models in NLP
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Manaal Faruqui, Hinrich Schuetze, Isabel Trancoso, Yadollah Yaghoobzadeh
Venue: SCLeM
Publisher: Association for Computational Linguistics
Pages: 76–81
URL: https://aclanthology.org/W17-4111
DOI: 10.18653/v1/W17-4111
Cite (ACL): Katharina Kann and Hinrich Schütze. 2017. Unlabeled Data for Morphological Generation With Character-Based Sequence-to-Sequence Models. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 76–81, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Unlabeled Data for Morphological Generation With Character-Based Sequence-to-Sequence Models (Kann & Schütze, SCLeM 2017)
PDF: https://aclanthology.org/W17-4111.pdf