Analyzing Neural MT Search and Model Performance

Jan Niehues, Eunah Cho, Thanh-Le Ha, Alex Waibel


Abstract
In this paper, we offer an in-depth analysis of modeling and search performance. We address the question of whether a more complex search algorithm is necessary, and whether more complex models, which might only be applicable during rescoring, are promising. By separating the search space from the modeling using n-best list reranking, we analyze the influence of both parts of an NMT system independently. By comparing NMT systems of different quality, we show that the better translation is already in the search space of the weaker systems. These results indicate that the current search algorithms are sufficient for NMT systems. Furthermore, we show that even a relatively small n-best list of 50 hypotheses already contains notably better translations.
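The n-best list reranking setup the abstract refers to can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the hypotheses, scores, and the `rerank` helper are all assumptions made for the example.

```python
# Sketch of n-best list reranking: decode with one system to obtain an
# n-best list, then rescore the hypotheses with a (possibly more complex)
# model and pick the highest-scoring one. All data here is illustrative.

def rerank(nbest, rescore):
    """Return the hypothesis with the highest score under `rescore`."""
    return max(nbest, key=rescore)

# Toy n-best list for one source sentence (hypothesis strings only).
nbest = [
    "the cat sat on the mat",
    "the cat sits on the mat",
    "a cat sat on a mat",
]

# Stand-in rescoring model: a lookup of toy log-probabilities. In practice
# this could be, e.g., a larger model usable only at rescoring time.
toy_scores = {
    "the cat sat on the mat": -1.9,
    "the cat sits on the mat": -1.5,  # the rescorer prefers this one
    "a cat sat on a mat": -2.8,
}

best = rerank(nbest, lambda h: toy_scores[h])
print(best)
```

Because the reranker can only pick from the fixed list, this cleanly separates search (which hypotheses are in the list) from modeling (which hypothesis the scores select), which is the basis of the paper's analysis.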
Anthology ID:
W17-3202
Volume:
Proceedings of the First Workshop on Neural Machine Translation
Month:
August
Year:
2017
Address:
Vancouver
Editors:
Thang Luong, Alexandra Birch, Graham Neubig, Andrew Finch
Venue:
NGT
Publisher:
Association for Computational Linguistics
Pages:
11–17
URL:
https://aclanthology.org/W17-3202
DOI:
10.18653/v1/W17-3202
Cite (ACL):
Jan Niehues, Eunah Cho, Thanh-Le Ha, and Alex Waibel. 2017. Analyzing Neural MT Search and Model Performance. In Proceedings of the First Workshop on Neural Machine Translation, pages 11–17, Vancouver. Association for Computational Linguistics.
Cite (Informal):
Analyzing Neural MT Search and Model Performance (Niehues et al., NGT 2017)
PDF:
https://aclanthology.org/W17-3202.pdf