Word Representation Models for Morphologically Rich Languages in Neural Machine Translation

Ekaterina Vylomova, Trevor Cohn, Xuanli He, Gholamreza Haffari


Abstract
Out-of-vocabulary words present a great challenge for Machine Translation. Recently, various character-level compositional models have been proposed to address this issue. In this work we incorporate the two most popular neural architectures, namely LSTM and CNN, into hard- and soft-attention models of translation for character-level representation of the source. We propose semantic and morphological intrinsic evaluations of encoder-level representations. Our analysis of the learned representations reveals that the character-based LSTM appears to capture morphological aspects better than the character-based CNN. We also show that the hard-attention model provides better character-level representations than the vanilla one.
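To make the two composition strategies the abstract compares concrete, here is a minimal, hypothetical PyTorch sketch of character-level word encoders: a bidirectional LSTM whose final states are concatenated, and a CNN with max-pooling over character n-gram filters. All class names, dimensions, and filter widths are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumed, not the paper's exact setup) of character-level
# word composition with an LSTM vs. a CNN, in PyTorch.
import torch
import torch.nn as nn

class CharLSTMWordEncoder(nn.Module):
    """Compose a word vector from its characters with a BiLSTM."""
    def __init__(self, n_chars, char_dim=32, word_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # Bidirectional: each direction contributes word_dim // 2 units.
        self.lstm = nn.LSTM(char_dim, word_dim // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, char_ids):                # (batch, max_word_len)
        x = self.embed(char_ids)                # (batch, len, char_dim)
        _, (h, _) = self.lstm(x)                # h: (2, batch, word_dim // 2)
        return torch.cat([h[0], h[1]], dim=-1)  # (batch, word_dim)

class CharCNNWordEncoder(nn.Module):
    """Compose a word vector with convolutions over character n-grams."""
    def __init__(self, n_chars, char_dim=32, word_dim=128, widths=(2, 3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        per_filter = word_dim // len(widths)
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, per_filter, w, padding=w - 1) for w in widths)

    def forward(self, char_ids):                    # (batch, max_word_len)
        x = self.embed(char_ids).transpose(1, 2)    # (batch, char_dim, len)
        # Max-pool each filter's activations over character positions.
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=-1)            # (batch, word_dim)

# Toy usage: encode a batch of two padded 7-character words.
chars = torch.randint(1, 50, (2, 7))
lstm_vec = CharLSTMWordEncoder(n_chars=50)(chars)
cnn_vec = CharCNNWordEncoder(n_chars=50)(chars)
print(lstm_vec.shape, cnn_vec.shape)  # torch.Size([2, 128]) twice
```

In a setup like the paper's, either encoder's output would stand in for the word-level source embedding fed to the NMT encoder; the hard- vs. soft-attention distinction lives in the translation model and is independent of which composition function is used.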
Anthology ID:
W17-4115
Volume:
Proceedings of the First Workshop on Subword and Character Level Models in NLP
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Manaal Faruqui, Hinrich Schütze, Isabel Trancoso, Yadollah Yaghoobzadeh
Venue:
SCLeM
Publisher:
Association for Computational Linguistics
Pages:
103–108
URL:
https://aclanthology.org/W17-4115
DOI:
10.18653/v1/W17-4115
Bibkey:
Cite (ACL):
Ekaterina Vylomova, Trevor Cohn, Xuanli He, and Gholamreza Haffari. 2017. Word Representation Models for Morphologically Rich Languages in Neural Machine Translation. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 103–108, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Word Representation Models for Morphologically Rich Languages in Neural Machine Translation (Vylomova et al., SCLeM 2017)
PDF:
https://aclanthology.org/W17-4115.pdf