Examining Modularity in Multilingual LMs via Language-Specialized Subnetworks

Rochelle Choenni, Ekaterina Shutova, Dan Garrette


Abstract
Recent work has proposed explicitly inducing language-wise modularity in multilingual LMs via sparse fine-tuning (SFT) on per-language subnetworks as a means of better guiding cross-lingual sharing. In this paper, we investigate (1) the degree to which language-wise modularity *naturally* arises within models with no special modularity interventions, and (2) how cross-lingual sharing and interference differ between such models and those with explicit SFT-guided subnetwork modularity. To do so, we use XLM-R as our multilingual LM and quantify language specialization and cross-lingual interaction with a Training Data Attribution method that estimates the degree to which a model’s predictions are influenced by in-language and cross-language training examples. Our results show that language-specialized subnetworks do naturally arise, and that SFT, rather than always increasing modularity, can decrease language specialization of subnetworks in favor of more cross-lingual sharing.
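The abstract refers to a Training Data Attribution (TDA) score that measures how much in-language versus cross-language training examples influence a test prediction. As a rough illustration only, the sketch below computes a TracIn-style gradient-similarity influence score and aggregates it by language; the estimator, toy model, and data here are assumptions for exposition, not the paper's implementation.

```python
# Hypothetical sketch of a TracIn-style TDA score: the influence of a training
# example on a test prediction is approximated by the dot product of their loss
# gradients, summed over saved checkpoints. Toy model/data are illustrative only.
import torch
import torch.nn as nn

def loss_grad(model, loss_fn, x, y):
    """Flattened gradient of the loss on a single example w.r.t. model parameters."""
    model.zero_grad()
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_influence(checkpoints, loss_fn, train_ex, test_ex, lr=1e-3):
    """Sum of gradient dot products over checkpoints (TracIn-style approximation)."""
    score = 0.0
    for model in checkpoints:
        g_train = loss_grad(model, loss_fn, *train_ex)
        g_test = loss_grad(model, loss_fn, *test_ex)
        score += lr * torch.dot(g_train, g_test).item()
    return score

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(8, 3)                # stand-in for a fine-tuned LM classification head
    loss_fn = nn.CrossEntropyLoss()
    checkpoints = [model]                  # normally several checkpoints saved during training
    test_ex = (torch.randn(1, 8), torch.tensor([1]))           # e.g. a test item in language A
    train_set = [
        ("lang_A", (torch.randn(1, 8), torch.tensor([1]))),    # in-language training example
        ("lang_B", (torch.randn(1, 8), torch.tensor([0]))),    # cross-language training example
    ]
    # Comparing the aggregated scores per language indicates whether the prediction
    # relies more on in-language or cross-language training data.
    for lang, ex in train_set:
        print(lang, tracin_influence(checkpoints, loss_fn, ex, test_ex))
```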
Anthology ID:
2024.findings-naacl.21
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
287–301
URL:
https://aclanthology.org/2024.findings-naacl.21
Cite (ACL):
Rochelle Choenni, Ekaterina Shutova, and Dan Garrette. 2024. Examining Modularity in Multilingual LMs via Language-Specialized Subnetworks. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 287–301, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Examining Modularity in Multilingual LMs via Language-Specialized Subnetworks (Choenni et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.21.pdf
Copyright:
2024.findings-naacl.21.copyright.pdf