Towards better translation performance on spoken language

Chao Bei, Hao Zong


Abstract
In this paper, we describe GTCOM’s neural machine translation (NMT) systems for the International Workshop on Spoken Language Translation (IWSLT) 2017. We participated in the English-to-Chinese and Chinese-to-English tracks in the small data condition of the bilingual task and the zero-shot condition of the multilingual task. Our systems are based on the encoder-decoder architecture with an attention mechanism. We build byte pair encoding (BPE) models on the parallel data and on back-translated monolingual training data provided in the small data condition. Other techniques we explored in our systems include two deep architectures, layer normalization, weight normalization, and training models with annealing Adam. The official scores for English-to-Chinese and Chinese-to-English are 28.13 and 21.35 on test set 2016 and 28.30 and 22.16 on test set 2017. The official scores for German-to-Dutch, Dutch-to-German, Italian-to-Romanian and Romanian-to-Italian are 19.59, 17.95, 18.62 and 20.39 respectively.
Anthology ID:
2017.iwslt-1.7
Volume:
Proceedings of the 14th International Conference on Spoken Language Translation
Month:
December 14-15
Year:
2017
Address:
Tokyo, Japan
Editors:
Sakriani Sakti, Masao Utiyama
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
International Workshop on Spoken Language Translation
Pages:
48–54
URL:
https://aclanthology.org/2017.iwslt-1.7
Cite (ACL):
Chao Bei and Hao Zong. 2017. Towards better translation performance on spoken language. In Proceedings of the 14th International Conference on Spoken Language Translation, pages 48–54, Tokyo, Japan. International Workshop on Spoken Language Translation.
Cite (Informal):
Towards better translation performance on spoken language (Bei & Zong, IWSLT 2017)
PDF:
https://aclanthology.org/2017.iwslt-1.7.pdf