Fully Quantized Transformer for Machine Translation

Gabriele Prato, Ella Charlaix, Mehdi Rezagholizadeh


Abstract
State-of-the-art neural machine translation methods employ massive numbers of parameters. Drastically reducing the computational cost of such methods without degrading performance has so far been unsuccessful. To this end, we propose FullyQT: an all-inclusive quantization strategy for the Transformer. To the best of our knowledge, we are the first to show that it is possible to avoid any loss in translation quality with a fully quantized Transformer. Indeed, our 8-bit models achieve BLEU scores greater than or equal to their full-precision counterparts on most tasks. Compared with all previously proposed methods, we achieve state-of-the-art quantization results.
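The abstract does not spell out the quantization mechanism, but the general technique behind an "8-bit model" is uniform quantization of weights and activations. Below is a minimal sketch of simulated (fake) 8-bit uniform quantization in Python; it is an illustrative assumption, not the paper's exact FullyQT scheme, which additionally specifies which Transformer tensors to quantize and how ranges are estimated during training.

```python
# Minimal sketch of uniform 8-bit "fake" quantization (quantize then dequantize).
# NOTE: this is a generic illustration, not the FullyQT scheme from the paper.
import numpy as np

def fake_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Map x onto 2**num_bits integer levels and back to float (simulated quantization)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    if x_max == x_min:                       # constant tensor: nothing to quantize
        return x.copy()
    scale = (x_max - x_min) / (qmax - qmin)  # step size between quantization levels
    zero_point = round(qmin - x_min / scale) # integer offset so that x_min maps near qmin
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return ((q - zero_point) * scale).astype(x.dtype)  # dequantize back to float

# Example: the quantization error stays small for a typical weight matrix.
w = np.random.randn(512, 512).astype(np.float32)
w_q = fake_quantize(w, num_bits=8)
print("max abs error:", np.abs(w - w_q).max())
```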
Anthology ID:
2020.findings-emnlp.1
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1–14
URL:
https://aclanthology.org/2020.findings-emnlp.1
DOI:
10.18653/v1/2020.findings-emnlp.1
Cite (ACL):
Gabriele Prato, Ella Charlaix, and Mehdi Rezagholizadeh. 2020. Fully Quantized Transformer for Machine Translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1–14, Online. Association for Computational Linguistics.
Cite (Informal):
Fully Quantized Transformer for Machine Translation (Prato et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.1.pdf