ViDeBERTa: A powerful pre-trained language model for Vietnamese

Cong Dao Tran, Nhut Huy Pham, Anh Tuan Nguyen, Truong Son Hy, Tu Vu


Abstract
This paper presents ViDeBERTa, a new pre-trained monolingual language model for Vietnamese with three versions, ViDeBERTa_xsmall, ViDeBERTa_base, and ViDeBERTa_large, all pre-trained on a large-scale corpus of high-quality, diverse Vietnamese text using the DeBERTa architecture. Although many successful Transformer-based pre-trained language models have been proposed for English, there are still few pre-trained models for Vietnamese, a low-resource language, that achieve strong results on downstream tasks, especially question answering. We fine-tune and evaluate our models on three important downstream natural language tasks: part-of-speech tagging, named-entity recognition, and question answering. The empirical results demonstrate that ViDeBERTa, with far fewer parameters, surpasses previous state-of-the-art models on multiple Vietnamese-specific natural language understanding tasks. Notably, ViDeBERTa_base, with 86M parameters, only about 23% of the 370M parameters of PhoBERT_large, still performs on par with or better than the previous state-of-the-art model. Our ViDeBERTa models are available at: https://github.com/HySonLab/ViDeBERTa.
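As a minimal sketch (not part of the paper itself), the released checkpoints can presumably be loaded with the Hugging Face transformers library; the model identifier "Fsoft-AIC/videberta-base" below is an assumption based on the authors' public release and may differ, so consult the GitHub repository for the exact names.

    # Minimal sketch: load a ViDeBERTa checkpoint with Hugging Face transformers.
    # The checkpoint name "Fsoft-AIC/videberta-base" is an assumption; see
    # https://github.com/HySonLab/ViDeBERTa for the authoritative identifiers.
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Fsoft-AIC/videberta-base")
    model = AutoModel.from_pretrained("Fsoft-AIC/videberta-base")

    # Encode a Vietnamese sentence and obtain contextual embeddings.
    inputs = tokenizer("Hà Nội là thủ đô của Việt Nam.", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)

From these encoder outputs, a task head (e.g., a token-classification layer for POS tagging or NER, or a span-prediction head for question answering) would be fine-tuned as described in the paper.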
Anthology ID:
2023.findings-eacl.79
Volume:
Findings of the Association for Computational Linguistics: EACL 2023
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1071–1078
URL:
https://aclanthology.org/2023.findings-eacl.79
DOI:
10.18653/v1/2023.findings-eacl.79
Cite (ACL):
Cong Dao Tran, Nhut Huy Pham, Anh Tuan Nguyen, Truong Son Hy, and Tu Vu. 2023. ViDeBERTa: A powerful pre-trained language model for Vietnamese. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1071–1078, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
ViDeBERTa: A powerful pre-trained language model for Vietnamese (Tran et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-eacl.79.pdf
Video:
https://aclanthology.org/2023.findings-eacl.79.mp4