RTM Ensemble Learning Results at Quality Estimation Task

Ergun Biçici


Abstract
We obtain new results using referential translation machines (RTMs) with predictions mixed and stacked to obtain a better mixture-of-experts prediction. We achieve better results than the baseline model in the Task 1 subtasks. Our stacking results significantly improve performance on the training sets but decrease it on the test sets. Based on MAE, RTMs rank 5th among 13 models in the ru-en subtask and 5th in the multilingual track of sentence-level Task 1.
Anthology ID:
2020.wmt-1.114
Volume:
Proceedings of the Fifth Conference on Machine Translation
Month:
November
Year:
2020
Address:
Online
Editors:
Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Yvette Graham, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Makoto Morishita, Christof Monz, Masaaki Nagata, Toshiaki Nakazawa, Matteo Negri
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
999–1003
URL:
https://aclanthology.org/2020.wmt-1.114
Cite (ACL):
Ergun Biçici. 2020. RTM Ensemble Learning Results at Quality Estimation Task. In Proceedings of the Fifth Conference on Machine Translation, pages 999–1003, Online. Association for Computational Linguistics.
Cite (Informal):
RTM Ensemble Learning Results at Quality Estimation Task (Biçici, WMT 2020)
PDF:
https://aclanthology.org/2020.wmt-1.114.pdf
Video:
https://slideslive.com/38939628