How far can we get with one GPU in 100 hours? CoAStaL at MultiIndicMT Shared Task

Rahul Aralikatte, Héctor Ricardo Murrieta Bello, Miryam de Lhoneux, Daniel Hershcovich, Marcel Bollmann, Anders Søgaard
Abstract
This work shows that competitive translation results can be obtained in a constrained setting by incorporating the latest advances in memory and compute optimization. We train and evaluate large multilingual translation models using a single GPU for a maximum of 100 hours and get within 4-5 BLEU points of the top submission on the leaderboard. We also benchmark standard baselines on the PMI corpus and re-discover well-known shortcomings of translation systems and metrics.
Anthology ID:
2021.wat-1.24
Volume:
Proceedings of the 8th Workshop on Asian Translation (WAT2021)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP | WAT
Publisher:
Association for Computational Linguistics
Pages:
205–211
URL:
https://aclanthology.org/2021.wat-1.24
DOI:
10.18653/v1/2021.wat-1.24
PDF:
https://aclanthology.org/2021.wat-1.24.pdf