Small Batch Sizes Improve Training of Low-Resource Neural MT

Àlex Atrio, Andrei Popescu-Belis


Abstract
We study the role of an essential hyper-parameter that governs the training of Transformers for neural machine translation in a low-resource setting: the batch size. Using theoretical insights and experimental evidence, we argue against the widespread belief that batch size should be set as large as allowed by the memory of the GPUs. We show that in a low-resource setting, a smaller batch size leads to higher scores in a shorter training time, and argue that this is due to better regularization of the gradients during training.
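As a rough illustration of the paper's argument that smaller batches yield noisier gradient estimates, and that this noise can act as a regularizer in low-resource training, the following minimal PyTorch sketch (a hypothetical toy example, not the authors' code or data) measures how per-step gradient noise shrinks as the batch size grows.

import torch

torch.manual_seed(0)

# Toy linear-regression data standing in for a small parallel corpus.
X = torch.randn(10_000, 32)
true_w = torch.randn(32, 1)
y = X @ true_w + 0.1 * torch.randn(10_000, 1)

w = torch.zeros(32, 1, requires_grad=True)

def grad_for_batch(batch_size):
    # Gradient of the loss w.r.t. w for one randomly drawn batch.
    idx = torch.randint(0, X.size(0), (batch_size,))
    loss = torch.nn.functional.mse_loss(X[idx] @ w, y[idx])
    (g,) = torch.autograd.grad(loss, w)
    return g

# Average per-coordinate standard deviation of the gradient over 200 draws:
# the smaller batch gives a much noisier (implicitly regularized) estimate.
for bs in (32, 4096):
    grads = torch.stack([grad_for_batch(bs) for _ in range(200)])
    print(f"batch={bs:5d}  gradient std: {grads.std(dim=0).mean():.4f}")

Whether this extra gradient noise helps or hurts depends on the data regime; the paper's claim concerns the low-resource setting specifically.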
Anthology ID:
2021.icon-main.4
Volume:
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2021
Address:
National Institute of Technology Silchar, Silchar, India
Editors:
Sivaji Bandyopadhyay, Sobha Lalitha Devi, Pushpak Bhattacharyya
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
18–24
URL:
https://aclanthology.org/2021.icon-main.4
Cite (ACL):
Àlex Atrio and Andrei Popescu-Belis. 2021. Small Batch Sizes Improve Training of Low-Resource Neural MT. In Proceedings of the 18th International Conference on Natural Language Processing (ICON), pages 18–24, National Institute of Technology Silchar, Silchar, India. NLP Association of India (NLPAI).
Cite (Informal):
Small Batch Sizes Improve Training of Low-Resource Neural MT (Atrio & Popescu-Belis, ICON 2021)
PDF:
https://aclanthology.org/2021.icon-main.4.pdf