Mardhiyah Sanni
2026
AfriVox: Probing Multilingual and Accent Robustness of Speech LLMs
Busayo Awobade | Mardhiyah Sanni | Tassallah Abdullahi | Chibuzor Okocha | Kelechi Ezema | Devendra Deepak Kayande | Lukman Enegi Ismaila | Tobi Olatunji | Gloria Ashiya Katuka
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advances in multimodal and speech-native large language models (LLMs) have delivered impressive speech recognition, translation, understanding, and question-answering capabilities for high-resource languages. However, African languages and non-native French or English accents remain dramatically underrepresented in benchmarks, limiting the understanding and applicability of leading LLMs for millions of francophone and anglophone users in low-resource settings. We present AfriVox, an open-source benchmark (including novel domain-specific and unscripted datasets) across 20 African languages, African-accented French, Arabic, and 100+ African English accents, contrasting leading multimodal speech LLMs with traditional unimodal automatic speech recognition (ASR) and speech translation (AST) models. Our analysis reveals significant language coverage variation, surprising LLM translation performance gains (e.g., Gemini), robustness concerns with unscripted speech, and substantial performance disparities for "supported" African languages. We profile the strengths, limitations, and language support of each model, and conduct the first targeted fine-tuning of a modern speech LLM (Qwen2.5-Omni) for three Nigerian languages, exceeding SOTA and achieving up to 54% relative WER reduction and significant BLEU gains, offering practical guidance for implementers seeking to serve local language users.
2025
AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset
Charles Nimo | Tobi Olatunji | Abraham Toluwase Owodunni | Tassallah Abdullahi | Emmanuel Ayodele | Mardhiyah Sanni | Ezinwanne C. Aka | Folafunmi Omofoye | Foutse Yuehgoh | Timothy Faniran | Bonaventure F. P. Dossou | Moshood O. Yekini | Jonas Kemp | Katherine A Heller | Jude Chidubem Omeke | Chidi Asuzu Md | Naome A Etori | Aïmérou Ndiaye | Ifeoma Okoh | Evans Doe Ocansey | Wendy Kinara | Michael L. Best | Irfan Essa | Stephen Edward Moore | Chris Fourie | Mercy Nyamewaa Asiedu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advancements in large language model (LLM) performance on medical multiple-choice question (MCQ) benchmarks have stimulated interest from healthcare providers and patients globally. Particularly in low- and middle-income countries (LMICs) facing acute physician shortages and a lack of specialists, LLMs offer a potentially scalable pathway to enhance healthcare access and reduce costs. However, their effectiveness in the Global South, especially across the African continent, remains to be established. In this work, we introduce AfriMed-QA, the first large-scale Pan-African English multi-specialty medical Question-Answering (QA) dataset, comprising 15,000 questions (open and closed-ended) sourced from over 60 medical schools across 16 countries and covering 32 medical specialties. We further evaluate 30 LLMs across multiple axes, including correctness and demographic bias. Our findings show significant performance variation across specialties and geographies, and MCQ performance clearly lags that on USMLE (MedQA). We find that biomedical LLMs underperform general models and that smaller edge-friendly LLMs struggle to achieve a passing score. Interestingly, human evaluations show a consistent consumer preference for LLM answers and explanations when compared with clinician answers.
AfriSpeech-MultiBench: A Verticalized Multidomain Multicountry Benchmark Suite for African Accented English ASR
Gabrial Zencha Ashungafac | Mardhiyah Sanni | Busayo Awobade | Alex Gichamba | Tobi Olatunji
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Recent advances in speech-enabled AI, including Google's NotebookLM and OpenAI's speech-to-speech API, are driving widespread interest in voice interfaces across sectors such as finance, health, agritech, legal services, and call centers in the Global North and South. Despite this momentum, there exists no publicly available application-specific model evaluation that caters to Africa's linguistic diversity. We present AfriSpeech-MultiBench, the first domain-specific evaluation suite for over 100 African English accents across 10+ countries and seven application domains: Finance, Legal, Medical, General Dialogue, Call Center, Named Entities, and Hallucination Robustness. We benchmark a diverse range of open, closed, unimodal ASR and multimodal LLM-based speech recognition systems using both spontaneous and non-spontaneous speech conversations drawn from various open African-accented English speech datasets. Our empirical analysis reveals systematic variation: open-source ASR excels in spontaneous speech contexts but degrades on noisy, non-native dialogue; multimodal LLMs are more accent-robust yet struggle with domain-specific named entities; proprietary models deliver high accuracy on clean speech but vary significantly by country and domain. Smaller models fine-tuned on African English achieve competitive accuracy with lower latency, a practical advantage for deployment. By releasing this benchmark, we empower practitioners and researchers to select voice technologies suited to African use cases, fostering inclusive voice applications for underserved communities.
Afrispeech-Dialog: A Benchmark Dataset for Spontaneous English Conversations in Healthcare and Beyond
Mardhiyah Sanni | Tassallah Abdullahi | Devendra Deepak Kayande | Emmanuel Ayodele | Naome A Etori | Michael Samwel Mollel | Moshood O. Yekini | Chibuzor Okocha | Lukman Enegi Ismaila | Folafunmi Omofoye | Boluwatife A. Adewale | Tobi Olatunji
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Speech technologies are transforming interactions across various sectors, from healthcare to call centers and robotics, yet their performance on African-accented conversations remains underexplored. We introduce Afrispeech-Dialog, a benchmark dataset of 50 simulated medical and non-medical African-accented English conversations, designed to evaluate automatic speech recognition (ASR) and related technologies. We assess state-of-the-art (SOTA) speaker diarization and ASR systems on long-form, accented speech, comparing their performance with that on native accents, and discover a 10%+ performance degradation. Additionally, we explore the medical conversation summarization capabilities of large language models (LLMs) to demonstrate the impact of ASR errors on downstream medical summaries, providing insights into the challenges and opportunities for speech technologies in the Global South. Our work highlights the need for more inclusive datasets to advance conversational AI in low-resource settings.
2024
SparseFit: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations
Jesus Solano | Mardhiyah Sanni | Oana-Maria Camburu | Pasquale Minervini
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Models that generate natural language explanations (NLEs) for their predictions have recently gained increasing interest. However, this approach usually demands large datasets of human-written NLEs for the ground-truth answers at training time, which can be expensive and potentially infeasible for some applications. When only a few NLEs are available (a few-shot setup), fine-tuning pre-trained language models (PLMs) in conjunction with prompt-based learning has recently shown promising results. However, PLMs typically have billions of parameters, making full fine-tuning expensive. We propose SparseFit, a sparse few-shot fine-tuning strategy that leverages discrete prompts to jointly generate predictions and NLEs. We experiment with SparseFit on three sizes of the T5 language model and four datasets, and compare it against existing state-of-the-art Parameter-Efficient Fine-Tuning (PEFT) techniques. We find that fine-tuning only 6.8% of the model parameters leads to competitive results for both task performance and the quality of the generated NLEs compared to full fine-tuning, and produces better results on average than other PEFT methods in terms of predictive accuracy and NLE quality.
Co-authors
- Tobi Olatunji 4
- Tassallah Abdullahi 3
- Busayo Awobade 2
- Emmanuel Ayodele 2
- Naome A. Etori 2
- Lukman Enegi Ismaila 2
- Devendra Deepak Kayande 2
- Chibuzor Okocha 2
- Folafunmi Omofoye 2
- Moshood O. Yekini 2
- Boluwatife A. Adewale 1
- Ezinwanne C. Aka 1
- Gabrial Zencha Ashungafac 1
- Mercy Nyamewaa Asiedu 1
- Michael L. Best 1
- Oana-Maria Camburu 1
- Bonaventure F. P. Dossou 1
- Irfan Essa 1
- Kelechi Ezema 1
- Timothy Faniran 1
- Chris Fourie 1
- Alex Gichamba 1
- Katherine A Heller 1
- Gloria Ashiya Katuka 1
- Jonas Kemp 1
- Wendy Kinara 1
- Chidi Asuzu Md 1
- Pasquale Minervini 1
- Michael Samwel Mollel 1
- Stephen Edward Moore 1
- Aïmérou Ndiaye 1
- Charles Nimo 1
- Evans Doe Ocansey 1
- Ifeoma Okoh 1
- Jude Chidubem Omeke 1
- Abraham Toluwase Owodunni 1
- Jesus Solano 1
- Foutse Yuehgoh 1