Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets
Israel Azime | Atnafu Tonja | Tadesse Belay | Mitiku Yohannes Fuge | Aman Wassie | Eyasu Jada | Yonas Chanie | Walelign Sewunetie | Seid Yimam
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) have received a lot of attention in natural language processing (NLP) research because of their exceptional performance in understanding and generating human languages. However, low-resource languages are left behind due to the unavailability of resources. In this work, we focus on enhancing the LLaMA-2-Amharic model by integrating task-specific and generative datasets to improve language model performance for Amharic. We compile an Amharic instruction fine-tuning dataset and fine-tune the LLaMA-2-Amharic model on it. The fine-tuned model shows promising results on different NLP tasks. We also explore the effectiveness of translated instruction datasets compared to the dataset we created. Our dataset creation pipeline, along with instruction datasets, trained models, and evaluation outputs, is made publicly available to encourage research in language-specific models.
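The abstract describes instruction fine-tuning of a LLaMA-2-Amharic base model on a compiled instruction dataset. The sketch below illustrates one common way such fine-tuning is done with Hugging Face Transformers and PEFT/LoRA; it is not the paper's actual training recipe, and the checkpoint path, dataset file, field names, and hyperparameters are placeholder assumptions.

```python
# A minimal instruction fine-tuning sketch, assuming a LLaMA-2-Amharic base
# checkpoint and a JSONL dataset with "instruction"/"input"/"output" fields.
# Names and hyperparameters below are placeholders, not the paper's artifacts.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          TrainingArguments, Trainer,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

BASE_MODEL = "path/to/llama-2-amharic"     # placeholder checkpoint
DATA_FILE = "amharic_instructions.jsonl"   # placeholder dataset

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)

# Attach LoRA adapters so only a small set of parameters is updated.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

def to_prompt(example):
    # Render one instruction example as a single training prompt.
    text = f"### Instruction:\n{example['instruction']}\n"
    if example.get("input"):
        text += f"### Input:\n{example['input']}\n"
    text += f"### Response:\n{example['output']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=1024)

dataset = load_dataset("json", data_files=DATA_FILE, split="train").map(to_prompt)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="walia-llm-sft", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4,
                           logging_steps=50, fp16=True),
    train_dataset=dataset,
    # Causal-LM collator copies input_ids into labels for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```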