Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation

Chanjun Park, Sungjin Park, Seolhwa Lee, Taesun Whang, Heuiseok Lim


Abstract
In the field of natural language processing, ensembles are widely known to be effective in improving performance. This paper analyzes how ensembles of neural machine translation (NMT) models affect performance by designing various experimental setups (i.e., intra-ensemble, inter-ensemble, and non-convergence ensemble). For an in-depth examination, we analyze each ensemble method with respect to several aspects, such as different attention models and vocabulary strategies. Experimental results show that ensembling does not always result in performance gains, and we report noteworthy negative findings.
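For context, the standard NMT ensembling scheme the paper builds on combines models at decoding time by averaging their next-token distributions at every step. The sketch below illustrates this idea only; it assumes a hypothetical `model.step(src, prev_tokens)` interface and is not the authors' implementation.

```python
import torch

def ensemble_next_token_logprobs(models, src, prev_tokens):
    """Average per-step token distributions across NMT models.

    A minimal sketch of decoding-time ensembling; `models` is a list of
    trained models exposing a hypothetical `step` method that returns
    next-token logits of shape (vocab_size,).
    """
    probs = None
    for model in models:
        logits = model.step(src, prev_tokens)   # assumed interface
        p = torch.softmax(logits, dim=-1)       # per-model distribution
        probs = p if probs is None else probs + p
    probs = probs / len(models)                 # arithmetic mean over models
    return torch.log(probs)                     # log-probs, e.g. for beam search
```

A beam-search decoder would call this in place of a single model's scoring function; averaging in probability space (rather than log space) is one common choice among several.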
Anthology ID:
2021.insights-1.4
Volume:
Proceedings of the Second Workshop on Insights from Negative Results in NLP
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
João Sedoc, Anna Rogers, Anna Rumshisky, Shabnam Tafreshi
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
23–28
URL:
https://aclanthology.org/2021.insights-1.4
DOI:
10.18653/v1/2021.insights-1.4
Cite (ACL):
Chanjun Park, Sungjin Park, Seolhwa Lee, Taesun Whang, and Heuiseok Lim. 2021. Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation. In Proceedings of the Second Workshop on Insights from Negative Results in NLP, pages 23–28, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation (Park et al., insights 2021)
PDF:
https://aclanthology.org/2021.insights-1.4.pdf
Video:
https://aclanthology.org/2021.insights-1.4.mp4