Not Enough Data to Pre-train Your Language Model? MT to the Rescue!

Gorka Urbizu, Iñaki San Vicente, Xabier Saralegi, Ander Corral


Abstract
In recent years, pre-trained transformer-based language models (LMs) have become a key resource for implementing most NLP tasks. However, pre-training such models demands large text collections, which are not available in most languages. In this paper, we study the use of machine-translated corpora for pre-training LMs. We answer the following research questions: RQ1: Is MT-based data an alternative to real data for learning an LM?; RQ2: Can real data be complemented with translated data to improve the resulting LM? To answer these two questions, several BERT models for Basque have been trained, combining real data and synthetic data translated from Spanish. The evaluation carried out on 9 NLU tasks indicates that models trained exclusively on translated data offer competitive results. Furthermore, models trained with real data can be improved with synthetic data, although further research is needed on the matter.
Anthology ID:
2023.findings-acl.235
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3826–3836
URL:
https://aclanthology.org/2023.findings-acl.235
DOI:
10.18653/v1/2023.findings-acl.235
Cite (ACL):
Gorka Urbizu, Iñaki San Vicente, Xabier Saralegi, and Ander Corral. 2023. Not Enough Data to Pre-train Your Language Model? MT to the Rescue!. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3826–3836, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Not Enough Data to Pre-train Your Language Model? MT to the Rescue! (Urbizu et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.235.pdf