Multi-view Subword Regularization

Xinyi Wang, Sebastian Ruder, Graham Neubig


Abstract
Multilingual pretrained representations generally rely on subword segmentation algorithms to create a shared multilingual vocabulary. However, standard heuristic algorithms often lead to sub-optimal segmentation, especially for languages with limited amounts of data. In this paper, we take two major steps towards alleviating this problem. First, we demonstrate empirically that applying existing subword regularization methods (Kudo, 2018; Provilkov et al., 2020) during fine-tuning of pre-trained multilingual representations improves the effectiveness of cross-lingual transfer. Second, to take full advantage of different possible input segmentations, we propose Multi-view Subword Regularization (MVR), a method that enforces consistency between the predictions made on inputs tokenized by the standard deterministic segmentation and by probabilistically sampled segmentations. Results on the XTREME multilingual benchmark (Hu et al., 2020) show that MVR brings consistent improvements of up to 2.5 points over using standard segmentation algorithms.
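The consistency objective the abstract describes can be sketched as follows. This is an illustrative toy in pure Python, not the authors' implementation: the two probability vectors stand in for a model's predictions on the deterministically segmented and the probabilistically segmented views of the same input, and the weight `alpha` is a hypothetical hyperparameter.

```python
import math

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mvr_loss(p_det, p_sampled, gold, alpha=0.2):
    """Sketch of an MVR-style objective (hypothetical, not the paper's code):
    cross-entropy on the deterministic view plus a symmetric-KL consistency
    term that ties predictions on the two segmentations together."""
    ce = -math.log(p_det[gold])  # supervised loss on the standard segmentation
    consistency = 0.5 * (kl(p_det, p_sampled) + kl(p_sampled, p_det))
    return ce + alpha * consistency

# Toy predictions over 3 labels from the two input views.
p_det = [0.7, 0.2, 0.1]      # deterministic subword segmentation
p_sampled = [0.6, 0.3, 0.1]  # probabilistically sampled segmentation
print(mvr_loss(p_det, p_sampled, gold=0))
```

The symmetric KL term is zero when the two views agree, so the regularizer only penalizes predictions that depend on the particular segmentation chosen.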
Anthology ID:
2021.naacl-main.40
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
473–482
URL:
https://aclanthology.org/2021.naacl-main.40
DOI:
10.18653/v1/2021.naacl-main.40
PDF:
https://aclanthology.org/2021.naacl-main.40.pdf
Optional supplementary data:
 2021.naacl-main.40.OptionalSupplementaryData.zip