Compact and Robust Models for Japanese-English Character-level Machine Translation

Jinan Dai, Kazunori Yamaguchi


Abstract
Character-level translation has been shown to achieve good translation quality without explicit segmentation, but training a character-level model requires substantial hardware resources. In this paper, we introduce two character-level translation models for Japanese-English translation: a mid-gated model and a multi-attention model. We show that the mid-gated model achieves the better BLEU scores of the two. We also show that a relatively narrow beam of width 4 or 5 is sufficient for the mid-gated model. For unknown words containing Katakana, the mid-gated model can often produce a reasonable translation by coining a close word. The model also produces tolerable results for heavily noised sentences, even though it was trained on a noise-free dataset.
Anthology ID:
D19-5202
Volume:
Proceedings of the 6th Workshop on Asian Translation
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Toshiaki Nakazawa, Chenchen Ding, Raj Dabre, Anoop Kunchukuttan, Nobushige Doi, Yusuke Oda, Ondřej Bojar, Shantipriya Parida, Isao Goto, Hideya Mino
Venue:
WAT
Publisher:
Association for Computational Linguistics
Pages:
36–44
URL:
https://aclanthology.org/D19-5202
DOI:
10.18653/v1/D19-5202
Cite (ACL):
Jinan Dai and Kazunori Yamaguchi. 2019. Compact and Robust Models for Japanese-English Character-level Machine Translation. In Proceedings of the 6th Workshop on Asian Translation, pages 36–44, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Compact and Robust Models for Japanese-English Character-level Machine Translation (Dai & Yamaguchi, WAT 2019)
PDF:
https://aclanthology.org/D19-5202.pdf