Aman Kassahun Wassie
2026
AfriNLLB: Efficient Translation Models for African Languages
Yasmin Moslem | Aman Kassahun Wassie | Amanuel Gizachew Abebe
Proceedings of the 7th Workshop on African Natural Language Processing (AfricaNLP 2026)
In this work, we present AfriNLLB, a series of lightweight models for efficient translation from and into African languages. AfriNLLB supports 15 language pairs (30 translation directions), including Swahili, Hausa, Yoruba, Amharic, Somali, Zulu, Lingala, Afrikaans, Wolof, and Egyptian Arabic, as well as other African Union official languages such as Arabic (MSA), French, Portuguese, and Spanish. Our training data covers bidirectional translation between English and 13 languages, and between French and two languages (Lingala and Wolof). AfriNLLB models are based on NLLB-200 600M, which we compress using iterative layer pruning and quantization. We fine-tune the pruned models on parallel corpora we curated for African languages, employing knowledge distillation from a larger teacher model. Our work aims to enable efficient deployment of translation models for African languages in resource-constrained settings. Our evaluation results demonstrate that AfriNLLB models achieve performance comparable to the baseline while being significantly faster. We release two versions of the AfriNLLB models: a Transformers version that allows further fine-tuning, and a CTranslate2 version for efficient inference. Moreover, we release all the training data that we used for fine-tuning the baseline and pruned models to facilitate further research.
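For illustration, below is a minimal sketch of running inference with a CTranslate2 release of an NLLB-style model. The model directory name is a placeholder, and the language codes ("eng_Latn" for English, "swh_Latn" for Swahili) are standard NLLB-200 codes assumed here for the example; neither is taken from the paper.

```python
# Minimal sketch: inference with a CTranslate2 conversion of an
# NLLB-style model. "afrinllb-ct2" is a hypothetical model directory.
import ctranslate2
import transformers

translator = ctranslate2.Translator("afrinllb-ct2", device="cpu")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn"
)

# Tokenize the source sentence into subword tokens.
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, world!"))

# NLLB decoders expect the target language code as the first target token.
results = translator.translate_batch([source], target_prefix=[["swh_Latn"]])
target_tokens = results[0].hypotheses[0][1:]  # drop the language-code prefix
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target_tokens)))
```

CTranslate2 stores the model with quantized weights and a fused inference graph, which is what makes this deployment path suitable for the resource-constrained settings the paper targets.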
2025
Bemba Speech Translation: Exploring a Low-Resource African Language
Muhammad Hazim Al Farouq | Aman Kassahun Wassie | Yasmin Moslem
Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
This paper describes our system submission to the low-resource languages track of the International Conference on Spoken Language Translation (IWSLT 2025), namely Bemba-to-English speech translation. We build cascaded speech translation systems based on Whisper and NLLB-200, and employ data augmentation techniques such as back-translation. We investigate the effect of using synthetic data and discuss our experimental setup.
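A minimal sketch of a cascaded speech translation pipeline (ASR followed by MT) in the spirit of the Whisper + NLLB-200 design described above is shown below. The Whisper checkpoint name is a hypothetical placeholder (the off-the-shelf Whisper models do not cover Bemba, so an adapted checkpoint is assumed); "bem_Latn" and "eng_Latn" are the NLLB-200 codes for Bemba and English.

```python
# Minimal sketch of a cascaded speech translation pipeline: ASR -> MT.
from transformers import pipeline

# Step 1: transcribe Bemba speech.
# "your-org/whisper-bemba" is a placeholder for a Whisper model
# adapted to Bemba; it is not a checkpoint from the paper.
asr = pipeline("automatic-speech-recognition", model="your-org/whisper-bemba")
transcript = asr("bemba_utterance.wav")["text"]

# Step 2: translate the transcript into English with NLLB-200.
mt = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="bem_Latn",
    tgt_lang="eng_Latn",
)
print(mt(transcript)[0]["translation_text"])
```

The cascade keeps the two components independently replaceable, which is convenient when experimenting with synthetic data for either stage.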
2024
Walia-LLM: Enhancing Amharic-LLaMA by Integrating Task-Specific and Generative Datasets
Israel Abebe Azime | Atnafu Lambebo Tonja | Tadesse Destaw Belay | Mitiku Yohannes Fuge | Aman Kassahun Wassie | Eyasu Shiferaw Jada | Yonas Chanie | Walelign Tewabe Sewunetie | Seid Muhie Yimam
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) have received considerable attention in natural language processing (NLP) research because of their exceptional performance in understanding and generating human languages. However, low-resource languages are left behind due to the scarcity of resources. In this work, we focus on enhancing the LLaMA-2-Amharic model by integrating task-specific and generative datasets to improve language model performance for Amharic. We compile an Amharic instruction fine-tuning dataset and fine-tune the LLaMA-2-Amharic model. The fine-tuned model shows promising results across different NLP tasks. We also explore the effectiveness of translated instruction datasets compared to the dataset we created. Our dataset creation pipeline, along with instruction datasets, trained models, and evaluation outputs, is made publicly available to encourage research in language-specific models.
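For illustration, a minimal sketch of packing a task-specific example into an instruction-tuning record follows. The template, field names, and the sample sentence are illustrative assumptions, not the paper's actual data format.

```python
# Minimal sketch: converting one supervised example into an
# instruction/response record for instruction fine-tuning.
# The schema below is a hypothetical format, not the paper's.
import json

def to_instruction_record(instruction: str, input_text: str, output: str) -> dict:
    """Pack one example into a prompt/response pair."""
    prompt = instruction if not input_text else f"{instruction}\n{input_text}"
    return {"prompt": prompt, "response": output}

record = to_instruction_record(
    instruction="Translate the following sentence to Amharic.",
    input_text="Hello.",
    output="ሰላም።",
)
print(json.dumps(record, ensure_ascii=False))
```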