Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer

Hele-Andra Kuulmets, Taido Purason, Agnes Luhtaru, Mark Fishel


Abstract
This paper explores cost-efficient methods to adapt pretrained Large Language Models (LLMs) to new lower-resource languages, with a specific focus on Estonian. Leveraging the Llama 2 model, we investigate the impact of combining cross-lingual instruction-tuning with additional monolingual pretraining. Our results demonstrate that even a relatively small amount of additional monolingual pretraining followed by cross-lingual instruction-tuning significantly enhances results on Estonian. Furthermore, we showcase cross-lingual knowledge transfer from high-quality English instructions to Estonian, resulting in improvements in commonsense reasoning and multi-turn conversation capabilities. Our best model, named Llammas, represents the first open-source instruction-following LLM for Estonian. Additionally, we publish Alpaca-est, the first general-task instruction dataset for Estonian. These contributions mark the initial progress in the direction of developing open-source LLMs for Estonian.
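The abstract describes cross-lingual instruction-tuning: fine-tuning Llama 2 on a mixture of English and Estonian instruction data. The sketch below illustrates one plausible way to set this up with Hugging Face datasets and Trainer; the Estonian data path, mixing strategy, prompt template, and hyperparameters are illustrative assumptions, not the exact recipe from the paper.

```python
# Hedged sketch: mix English (Alpaca) and Estonian (Alpaca-est-style) instructions,
# then fine-tune Llama 2 as a causal LM. Paths and hyperparameters are placeholders.
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE = "meta-llama/Llama-2-7b-hf"  # base model family used in the paper
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# English instructions from the public Alpaca dataset; Estonian instructions from
# a local JSON file (hypothetical path standing in for Alpaca-est).
en = load_dataset("tatsu-lab/alpaca", split="train")
et = load_dataset("json", data_files="alpaca_est.json", split="train")

def to_text(ex):
    # Render one instruction-response pair as plain text for causal LM training.
    prompt = ex["instruction"] + ("\n" + ex["input"] if ex.get("input") else "")
    return {"text": f"### Instruction:\n{prompt}\n\n### Response:\n{ex['output']}"}

# Keep only the rendered text column so the two datasets can be concatenated.
en_text = en.map(to_text, remove_columns=en.column_names)
et_text = et.map(to_text, remove_columns=et.column_names)
mixed = concatenate_datasets([en_text, et_text]).shuffle(seed=42)

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=1024)

tokenized = mixed.map(tokenize, remove_columns=mixed.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llammas-sft", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In this sketch both instruction sets are simply concatenated and shuffled; the paper additionally studies monolingual continued pretraining before this instruction-tuning stage, which is not shown here.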
Anthology ID:
2024.findings-naacl.210
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3309–3325
URL:
https://aclanthology.org/2024.findings-naacl.210
DOI:
10.18653/v1/2024.findings-naacl.210
Cite (ACL):
Hele-Andra Kuulmets, Taido Purason, Agnes Luhtaru, and Mark Fishel. 2024. Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3309–3325, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer (Kuulmets et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.210.pdf