Kosei Uemura
2026
MERLIN: Multi-Stage Curriculum Alignment for Multilingual Encoder-LLM Integration in Cross-Lingual Reasoning
Kosei Uemura | David Guzmán | Quang Phuoc Nguyen | Jesujoba Oluwadara Alabi | En-Shiun Annie Lee | David Ifeoluwa Adelani
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) excel in English but still struggle with complex reasoning in many low-resource languages (LRLs). Existing methods that align LLMs with multilingual encoders, such as LangBridge and MindMerger, raise accuracy for mid- and high-resource languages, yet a large performance gap remains for LRLs. We present MERLIN, a model-stacking framework that refines the model iteratively in two stages following a curriculum strategy, from general (bilingual bitext) to specific (task-specific data), and adapts only a small set of DoRA weights. On the AfriMGSM benchmark, MERLIN improves exact-match accuracy by +12.9 pp over MindMerger and outperforms GPT-4o-mini by 15.2 pp. It also yields consistent gains on MGSM and MSVAMP (+0.9 and +2.8 pp), demonstrating effectiveness across both low- and high-resource settings.
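As a rough illustration of the training recipe the abstract describes, the sketch below shows two-stage general-to-specific curriculum fine-tuning in which only DoRA adapter weights are updated. It is a minimal sketch, not the paper's released code: the backbone name, adapter rank, target modules, and toy data are illustrative assumptions, the encoder-LLM stacking itself is omitted, and the example assumes Hugging Face transformers and peft (which exposes DoRA via `use_dora=True`).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"            # placeholder backbone
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.pad_token or tok.eos_token     # Llama has no pad token by default
base = AutoModelForCausalLM.from_pretrained(model_name)

# DoRA (weight-decomposed LoRA): the backbone stays frozen and only a small
# set of adapter weights is trained.
cfg = LoraConfig(r=16, lora_alpha=32, use_dora=True,
                 target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, cfg)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

def sft_step(batch_texts):
    """One supervised fine-tuning step on a batch of raw training strings."""
    enc = tok(batch_texts, return_tensors="pt", padding=True, truncation=True)
    loss = model(**enc, labels=enc["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Curriculum: stage 1 ("general") uses bilingual bitext, stage 2 ("specific")
# uses task-specific reasoning data; both stages reuse the same adapters.
stage1_bitext = ["English sentence ||| parallel low-resource-language sentence"]
stage2_task = ["Question: 2 + 3 = ?  Answer: 5"]
for stage_batch in (stage1_bitext, stage2_task):
    sft_step(stage_batch)
```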
AfriMTEB and AfriE5: Benchmarking and Adapting Text Embedding Models for African Languages
Kosei Uemura | Miaoran Zhang | David Ifeoluwa Adelani
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Text embeddings are an essential building block of several NLP tasks, such as retrieval-augmented generation, which is crucial for preventing hallucinations in LLMs. Despite the recent release of the massively multilingual MTEB (MMTEB), African languages remain underrepresented, with existing tasks often repurposed from translation benchmarks such as FLORES clustering or SIB-200. In this paper, we introduce AfriMTEB, a regional expansion of MMTEB covering 59 languages, 14 tasks, and 38 datasets, including six newly added datasets. Unlike many MMTEB datasets that include fewer than five languages, the new additions span 14 to 56 African languages and introduce entirely new tasks, such as hate speech detection, intent detection, and emotion classification, which were not previously covered. Complementing this, we present AfriE5, an adaptation of the instruction-tuned mE5 model to African languages through cross-lingual contrastive distillation. Our evaluation shows that AfriE5 achieves state-of-the-art performance, outperforming strong baselines such as Gemini-Embeddings and mE5.
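To make the cross-lingual contrastive distillation idea concrete, the sketch below uses a generic InfoNCE-style objective (an assumption about the technique in general, not a reproduction of the AfriE5 recipe): the student embedding of an African-language sentence is pulled toward the teacher's embedding of its English parallel, with the other in-batch pairs acting as negatives. The temperature and the random tensors standing in for encoder outputs are placeholders.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.05):
    """student_emb, teacher_emb: (batch, dim); row i of each comes from a parallel pair."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    logits = s @ t.T / temperature          # pairwise cosine similarities
    labels = torch.arange(s.size(0))        # the aligned translation is the positive
    return F.cross_entropy(logits, labels)

# Toy usage: random vectors stand in for student/teacher encoder outputs.
loss = contrastive_distillation_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```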
2024
AfriInstruct: Instruction Tuning of African Languages for Diverse Tasks
Kosei Uemura | Mahe Chen | Alex Pejovic | Chika Maduabuchi | Yifei Sun | En-Shiun Annie Lee
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) perform worse on African languages than on high-resource languages. To address this issue, we introduce AfriInstruct, which specializes in instruction tuning for multiple African languages across various tasks. We trained LLaMa-2-7B using continual pretraining and instruction fine-tuning, and the resulting model demonstrates superior performance across multiple tasks. Our mixed-task evaluation shows that our model outperforms GPT-3.5-Turbo and other baseline models of similar size. Our contributions help close a critical gap in LLM performance between high-resource and African languages.
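For concreteness, a minimal sketch of how instruction-tuning examples are typically rendered into a single training string before supervised fine-tuning; the Alpaca-style template and the example fields below are illustrative assumptions, not the AfriInstruct data format.

```python
def format_instruction(instruction, inp, output):
    """Render one instruction-tuning example as a single training string
    (Alpaca-style template, used here only as an illustrative assumption)."""
    prompt = f"### Instruction:\n{instruction}\n\n"
    if inp:
        prompt += f"### Input:\n{inp}\n\n"
    prompt += f"### Response:\n{output}"
    return prompt

# Hypothetical example; the target text is a placeholder, not a real translation.
sample = format_instruction(
    "Translate the following sentence into Swahili.",
    "Good morning, how are you?",
    "<Swahili translation>",
)
print(sample)
```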
Empowering the Future with Multilinguality and Language Diversity
En-Shiun Annie Lee | Kosei Uemura | Syed Mekael Wasti | Mason Shipton
Proceedings of the Sixth Workshop on Teaching NLP
The rapid advancement and widespread adoption of Large Language Models have made it necessary to incorporate these cutting-edge techniques into Natural Language Processing (NLP) curricula, even with limited computing resources. This paper presents an applied NLP course, designed for upper-year computer science undergraduate students, on state-of-the-art techniques with an emphasis on multilinguality and language diversity. We hope to empower learners to advance their language communities while preparing for industry.