Hawau Olamide Toyin
2025
Dialectal Coverage And Generalization in Arabic Speech Recognition
Amirbek Djanibekov | Hawau Olamide Toyin | Raghad Alshalan | Abdullah Alatir | Hanan Aldarmaki
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Developing robust automatic speech recognition (ASR) systems for Arabic requires effective strategies to manage its diversity. Existing ASR systems mainly cover the Modern Standard Arabic (MSA) variety and a few high-resource dialects, but fall short in coverage and generalization across the multitude of spoken variants. Code-switching with English and French is also common in different regions of the Arab world, which challenges the performance of monolingual Arabic models. In this work, we introduce a suite of ASR models optimized to effectively recognize multiple variants of spoken Arabic, including MSA, various dialects, and code-switching. We provide open-source pre-trained models that cover data from 17 Arabic-speaking countries, fine-tuned MSA and dialectal ASR models that include at least 11 variants, and multilingual ASR models covering the embedded languages in code-switched utterances. We evaluate ASR performance across these spoken varieties and demonstrate gains in both coverage and performance compared to prior models.
Where Are We? Evaluating LLM Performance on African Languages
Ife Adebara | Hawau Olamide Toyin | Nahom Tesfu Ghebremichael | AbdelRahim A. Elmadany | Muhammad Abdul-Mageed
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Africa’s rich linguistic heritage remains underrepresented in NLP, largely due to historical policies that favor foreign languages and create significant data inequities. In this paper, we integrate theoretical insights on Africa’s language landscape with an empirical evaluation using Sahara— a comprehensive benchmark curated from large-scale, publicly accessible datasets capturing the continent’s linguistic diversity. By systematically assessing the performance of leading large language models (LLMs) on Sahara, we demonstrate how policy-induced data variations directly impact model effectiveness across African languages. Our findings reveal that while a few languages perform reasonably well, many Indigenous languages remain marginalized due to sparse data. Leveraging these insights, we offer actionable recommendations for policy reforms and inclusive data practices. Overall, our work underscores the urgent need for a dual approach—combining theoretical understanding with empirical evaluation—to foster linguistic diversity in AI for African communities.
Iqra’Eval: A Shared Task on Qur’anic Pronunciation Assessment
Yassine El Kheir | Amit Meghanani | Hawau Olamide Toyin | Nada Almarwani | Omnia Ibrahim | Yousseif Ahmed Elshahawy | Mostafa Shahin | Ahmed Ali
Proceedings of The Third Arabic Natural Language Processing Conference: Shared Tasks
NADI 2025: The First Multidialectal Arabic Speech Processing Shared Task
Bashar Talafha | Hawau Olamide Toyin | Peter Sullivan | AbdelRahim A. Elmadany | Abdurrahman Juma | Amirbek Djanibekov | Chiyu Zhang | Hamad Alshehhi | Hanan Aldarmaki | Mustafa Jarrar | Nizar Habash | Muhammad Abdul-Mageed
Proceedings of The Third Arabic Natural Language Processing Conference: Shared Tasks
Exploring the Limitations of Detecting Machine-Generated Text
Jad Doughman | Osama Mohammed Afzal | Hawau Olamide Toyin | Shady Shehata | Preslav Nakov | Zeerak Talat
Proceedings of the 31st International Conference on Computational Linguistics
Recent improvements in the quality of text generated by large language models have spurred research into identifying machine-generated text. Such work often presents high-performing detectors. However, humans and machines produce text in different styles and domains, and the impact of these variations on machine-generated text detection systems remains unclear. In this paper, we audit the classification performance of machine-generated text detectors by evaluating them on texts with varying writing styles. We find that classifiers are highly sensitive to stylistic changes and differences in text complexity, and in some cases degrade entirely to random classifiers. We further find that detection systems are particularly prone to misclassifying easy-to-read texts while performing well on complex texts, raising concerns about the reliability of detection systems. We recommend that future work attend to stylistic factors and reading difficulty levels of human-written and machine-generated text.
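The audit described above amounts to stratifying a test set by some measure of reading difficulty and measuring detector accuracy per stratum. A minimal sketch, using a crude average-word-length proxy (an assumption for illustration, not the readability measure used in the paper):

```python
def difficulty_bucket(text, threshold=5.0):
    # Crude readability proxy: longer average word length -> "complex".
    words = text.split()
    avg_len = sum(map(len, words)) / max(len(words), 1)
    return "complex" if avg_len > threshold else "easy"

def accuracy_by_bucket(samples, predict):
    # samples: list of (text, is_machine_generated) pairs.
    # predict: callable returning a boolean machine-generated prediction.
    buckets = {}
    for text, label in samples:
        b = difficulty_bucket(text)
        correct, total = buckets.get(b, (0, 0))
        buckets[b] = (correct + (predict(text) == label), total + 1)
    return {b: correct / total for b, (correct, total) in buckets.items()}
```

A gap between the "easy" and "complex" buckets for a given detector is the kind of reliability concern the paper reports.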
Voice of a Continent: Mapping Africa’s Speech Technology Frontier
AbdelRahim A. Elmadany | Sang Yun Kwon | Hawau Olamide Toyin | Alcides Alcoba Inciarte | Hanan Aldarmaki | Muhammad Abdul-Mageed
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Africa’s rich linguistic diversity remains significantly underrepresented in speech technologies, creating barriers to digital inclusion. To address this challenge, we systematically map the continent’s speech space of datasets and technologies, leading to SimbaBench, a new comprehensive benchmark for downstream African speech tasks. Using SimbaBench, we introduce the Simba family of models, achieving state-of-the-art performance across multiple African languages and speech tasks. Our benchmark analysis reveals critical patterns in resource availability, while our model evaluation demonstrates how dataset quality, domain diversity, and language family relationships influence performance across languages. Our work highlights the need for expanded speech technology resources that better reflect Africa’s linguistic diversity and provides a solid foundation for future research and development efforts toward more inclusive speech technologies.
2024
PolyWER: A Holistic Evaluation Framework for Code-Switched Speech Recognition
Karima Kadaoui | Maryam Al Ali | Hawau Olamide Toyin | Ibrahim Mohammed | Hanan Aldarmaki
Findings of the Association for Computational Linguistics: EMNLP 2024
Code-switched speech, particularly between languages that use different scripts, can be correctly transcribed in various forms, including different transliterations of the embedded language into the matrix language script. Traditional accuracy metrics, such as Word Error Rate (WER), are too strict to address this challenge. In this paper, we introduce PolyWER, a framework for evaluating speech recognition systems on language-mixed input. PolyWER accepts transcriptions of code-mixed segments in different forms, including transliterations and translations. We demonstrate the algorithm's use cases through detailed examples and evaluate it against human judgement. To enable the use of this metric, we augmented the annotations of a publicly available Arabic-English code-switched dataset with transliterations and translations of code-mixed speech. We also use these additional annotations to fine-tune ASR models and compare their performance using PolyWER. In addition to our main finding on PolyWER's effectiveness, our experiments show that alternative annotations can be more effective for fine-tuning monolingual ASR models.
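The core idea, accepting any of several reference forms for code-mixed content, can be conveyed with a toy metric. In this simplified sketch (not the authors' implementation; `poly_wer` and the take-the-minimum-over-variants strategy are illustrative assumptions), a hypothesis is scored against each full-utterance reference variant, e.g. original script, transliteration, and translation, and the most favorable error rate is kept:

```python
def wer(ref_tokens, hyp_tokens):
    # Standard Levenshtein-based Word Error Rate.
    d = [[0] * (len(hyp_tokens) + 1) for _ in range(len(ref_tokens) + 1)]
    for i in range(len(ref_tokens) + 1):
        d[i][0] = i
    for j in range(len(hyp_tokens) + 1):
        d[0][j] = j
    for i in range(1, len(ref_tokens) + 1):
        for j in range(1, len(hyp_tokens) + 1):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / max(len(ref_tokens), 1)

def poly_wer(reference_variants, hypothesis):
    # Accept whichever reference form (transliteration, translation, ...)
    # the hypothesis matches best.
    hyp = hypothesis.split()
    return min(wer(ref.split(), hyp) for ref in reference_variants)
```

The actual PolyWER algorithm matches alternatives per code-mixed segment within a single alignment rather than per whole utterance; this sketch only illustrates the acceptance of multiple transcription forms.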
STTATTS: Unified Speech-To-Text And Text-To-Speech Model
Hawau Olamide Toyin | Hao Li | Hanan Aldarmaki
Findings of the Association for Computational Linguistics: EMNLP 2024
Speech recognition and speech synthesis models are typically trained separately, each with its own set of learning objectives, training data, and model parameters, resulting in two distinct large networks. We propose a parameter-efficient approach to learning ASR and TTS jointly via a multi-task learning objective and shared parameters. Our evaluation demonstrates that the performance of our multi-task model is comparable to that of individually trained models while significantly saving computational and memory costs (~50% reduction in the total number of parameters required for the two tasks combined). We experiment with English as a resource-rich language, and Arabic as a relatively low-resource language due to a shortage of TTS data. Our models are trained with publicly available data, and both the training code and model checkpoints are openly available for further research.
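The ~50% parameter saving follows from sharing one trunk across both tasks while keeping the task-specific heads small relative to it. A back-of-the-envelope check (the sizes below are hypothetical, not the paper's actual parameter counts):

```python
def param_savings(trunk_params, head_params):
    # Two independent models each carry a full trunk plus a head;
    # the shared model carries one trunk plus two heads.
    separate = 2 * (trunk_params + head_params)
    shared = trunk_params + 2 * head_params
    return 1 - shared / separate

# Example: a 100M-parameter trunk with 5M-parameter task heads.
savings = param_savings(100_000_000, 5_000_000)  # ~0.48, i.e. ~48%
```

As the heads shrink relative to the trunk, the saving approaches the 50% ceiling of sharing one trunk instead of two.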
2023
ArTST: Arabic Text and Speech Transformer
Hawau Olamide Toyin | Amirbek Djanibekov | Ajinkya Kulkarni | Hanan Aldarmaki
Proceedings of ArabicNLP 2023
We present ArTST, a pre-trained Arabic text and speech transformer supporting open-source speech technologies for the Arabic language. The model architecture follows the unified-modal framework SpeechT5, recently released for English, and is focused on Modern Standard Arabic (MSA), with plans to extend the model to dialectal and code-switched Arabic in future editions. We pre-trained the model from scratch on MSA speech and text data, and fine-tuned it for the following tasks: Automatic Speech Recognition (ASR), Text-To-Speech synthesis (TTS), and spoken dialect identification. In our experiments comparing ArTST with SpeechT5, as well as with previously reported results on these tasks, ArTST performs on par with or exceeds the current state-of-the-art in all three tasks. Moreover, we find that our pre-training is conducive to generalization, which is particularly evident in the low-resource TTS task. The pre-trained model as well as the fine-tuned ASR and TTS models are released for research use.
Co-authors
- Hanan Aldarmaki 6
- Muhammad Abdul-Mageed 3
- Amirbek Djanibekov 3
- AbdelRahim A. Elmadany 3
- Ife Adebara 1
- Abdullah Alatir 1
- Alcides Alcoba Inciarte 1
- Maryam Al Ali 1
- Ahmed Ali 1
- Nada Almarwani 1
- Raghad Alshalan 1
- Hamad Alshehhi 1
- Jad Doughman 1
- Yassine El Kheir 1
- Yousseif Ahmed Elshahawy 1
- Nahom Tesfu Ghebremichael 1
- Nizar Habash 1
- Omnia Ibrahim 1
- Mustafa Jarrar 1
- Abdurrahman Juma 1
- Karima Kadaoui 1
- Ajinkya Kulkarni 1
- Sang Yun Kwon 1
- Hao Li 1
- Amit Meghanani 1
- Ibrahim Mohammed 1
- Osama Mohammed Afzal 1
- Preslav Nakov 1
- Mostafa Shahin 1
- Shady Shehata 1
- Peter Sullivan 1
- Bashar Talafha 1
- Zeerak Talat 1
- Chiyu Zhang 1