Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages

Fabian Schmidt, Philipp Borchert, Ivan Vulić, Goran Glavaš


Abstract
LLMs have become a go-to solution not just for text generation, but also for natural language understanding (NLU) tasks. Acquiring extensive knowledge through language modeling on web-scale corpora, they excel on English NLU, yet struggle to extend their NLU capabilities to underrepresented languages. In contrast, machine translation (MT) models produce excellent multilingual representations, resulting in strong translation performance even for low-resource languages. MT encoders, however, lack the knowledge necessary for comprehensive NLU that LLMs obtain through language modeling on immense corpora. In this work, we get the best of both worlds by integrating MT encoders directly into LLM backbones via sample-efficient self-distillation. The resulting MT-LLMs preserve the inherent multilingual representational alignment from the MT encoder, allowing lower-resource languages to tap into the rich knowledge embedded in English-centric LLMs. By merging the MT encoder and LLM into a single model, we mitigate both the propagation of translation errors and the inference overhead of MT decoding inherent to discrete translation-based cross-lingual transfer (e.g., translate-test). Our evaluation, spanning three prominent NLU tasks and 127 predominantly low-resource languages, shows MT-LLMs to be highly effective for cross-lingual transfer. MT-LLMs substantially and consistently outperform translate-test based on the same MT model, showing that we truly unlock multilingual language understanding for LLMs.
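As a rough illustration of the model-stacking and self-distillation idea sketched in the abstract, the minimal PyTorch example below stacks a frozen multilingual MT encoder onto a frozen English-centric LLM through a small trainable adapter, and trains that adapter by distilling the frozen LLM's own output distribution on English text into the stacked path. All module names, toy dimensions, the stand-in models, and the assumption that both tokenizers produce identical token sequences are hypothetical; this is a sketch of the general technique under those assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a frozen MT encoder -> trainable adapter -> frozen LLM,
# with the adapter trained by self-distillation from the LLM's own English outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_MT, D_LLM, SEQ = 1000, 64, 128, 16  # toy sizes, not from the paper

class ToyMTEncoder(nn.Module):
    """Stand-in for a frozen multilingual MT encoder."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MT)
        layer = nn.TransformerEncoderLayer(D_MT, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
    def forward(self, ids):                       # (B, T) -> (B, T, D_MT)
        return self.encoder(self.embed(ids))

class ToyLLM(nn.Module):
    """Stand-in for a frozen LLM; accepts token ids or input embeddings
    (causal masking omitted for brevity)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_LLM)
        layer = nn.TransformerEncoderLayer(D_LLM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D_LLM, VOCAB)
    def forward(self, ids=None, inputs_embeds=None):
        x = self.embed(ids) if inputs_embeds is None else inputs_embeds
        return self.lm_head(self.blocks(x))       # (B, T, VOCAB) logits

mt_encoder, llm = ToyMTEncoder(), ToyLLM()
for p in list(mt_encoder.parameters()) + list(llm.parameters()):
    p.requires_grad = False                       # both backbones stay frozen

adapter = nn.Sequential(                          # the only trainable component
    nn.Linear(D_MT, D_LLM), nn.GELU(), nn.Linear(D_LLM, D_LLM))
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

def self_distillation_step(english_ids):
    """One step: the frozen LLM on its native English input is the teacher;
    the MT-encoder -> adapter -> LLM path is the student. For this sketch we
    assume both tokenizers yield the same ids and sequence length."""
    with torch.no_grad():
        teacher_logits = llm(ids=english_ids)
    student_embeds = adapter(mt_encoder(english_ids))
    student_logits = llm(inputs_embeds=student_embeds)
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage: a few steps on random stand-in "English" batches.
for _ in range(3):
    print(self_distillation_step(torch.randint(0, VOCAB, (8, SEQ))))
```

In this sketch only the adapter receives gradients; the intended effect, per the abstract, is that once MT-encoder representations are mapped into the LLM's input space for English, the MT encoder's multilingual representational alignment lets other languages tap into the same LLM knowledge.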
Anthology ID:
2024.findings-emnlp.394
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6724–6743
URL:
https://aclanthology.org/2024.findings-emnlp.394
Cite (ACL):
Fabian Schmidt, Philipp Borchert, Ivan Vulić, and Goran Glavaš. 2024. Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6724–6743, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages (Schmidt et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.394.pdf
Software:
 2024.findings-emnlp.394.software.tgz
Data:
 2024.findings-emnlp.394.data.tgz