Iyanuoluwa Shode
2025
Does Generative AI speak Nigerian-Pidgin?: Issues about Representativeness and Bias for Multilingualism in LLMs
David Ifeoluwa Adelani | A. Seza Doğruöz | Iyanuoluwa Shode | Anuoluwapo Aremu
Findings of the Association for Computational Linguistics: NAACL 2025
Nigeria is a multilingual country with 500+ languages. Naija is a Nigerian Pidgin spoken by approximately 120M speakers; it is a mixed language drawing on English, Portuguese, Yoruba, Hausa, and Igbo. Although it was mainly a spoken language until recently, some online platforms (e.g., Wikipedia) now publish in written Naija as well. West African Pidgin English (WAPE) is also spoken in Nigeria and is used by the BBC to broadcast news on the internet to a wider audience, not only in Nigeria but also in other West African countries (e.g., Cameroon and Ghana). Through statistical analyses and machine translation experiments, our paper shows that these two pidgin varieties do not represent each other (i.e., there are linguistic differences in word order and vocabulary) and that Generative AI operates only based on WAPE. In other words, Naija is underrepresented in Generative AI, and it is hard to teach it to LLMs with only a few examples. In addition to the statistical analyses, we also provide historical information on both pidgins as well as insights from interviews conducted with volunteer Wikipedia contributors in Naija.
2024
MEDs for PETs: Multilingual Euphemism Disambiguation for Potentially Euphemistic Terms
Patrick Lee | Alain Chirino Trujillo | Diana Cuevas Plancarte | Olumide Ojo | Xinyi Liu | Iyanuoluwa Shode | Yuan Zhao | Anna Feldman | Jing Peng
Findings of the Association for Computational Linguistics: EACL 2024
Euphemisms are found across the world’s languages, making them a universal linguistic phenomenon. As such, euphemistic data may have useful properties for computational tasks across languages. In this study, we explore this premise by training a multilingual transformer model (XLM-RoBERTa) to disambiguate potentially euphemistic terms (PETs) in multilingual and cross-lingual settings. In line with current trends, we demonstrate that zero-shot learning across languages takes place. We also show cases where multilingual models perform better on the task compared to monolingual models by a statistically significant margin, indicating that multilingual data presents additional opportunities for models to learn about cross-lingual, computational properties of euphemisms. In a follow-up analysis, we focus on universal euphemistic “categories” such as death and bodily functions among others. We test to see whether cross-lingual data of the same domain is more important than within-language data of other domains to further understand the nature of the cross-lingual transfer.
AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages
Jiayi Wang | David Ifeoluwa Adelani | Sweta Agrawal | Marek Masiak | Ricardo Rei | Eleftheria Briakou | Marine Carpuat | Xuanli He | Sofia Bourhim | Andiswa Bukula | Muhidin Mohamed | Temitayo Olatoye | Tosin Adewumi | Hamam Mokayed | Christine Mwase | Wangui Kimotho | Foutse Yuehgoh | Anuoluwapo Aremu | Jessica Ojo | Shamsuddeen Hassan Muhammad | Salomey Osei | Abdul-Hakeem Omotayo | Chiamaka Chukwuneke | Perez Ogayo | Oumaima Hourrane | Salma El Anigri | Lolwethu Ndolela | Thabiso Mangwana | Shafie Abdi Mohamed | Hassan Ayinde | Oluwabusayo Olufunke Awoyomi | Lama Alkhaled | Sana Al-azzawi | Naome A. Etori | Millicent Ochieng | Clemencia Siro | Njoroge Kiragu | Eric Muchiri | Wangari Kimotho | Lyse Naomi Wamba Momo | Daud Abolade | Simbiat Ajao | Iyanuoluwa Shode | Ricky Macharm | Ruqayya Nasir Iro | Saheed S. Abdullahi | Stephen E. Moore | Bernard Opoku | Zainab Akinjobi | Abeeb Afolabi | Nnaemeka Obiefuna | Onyekachi Raphael Ogbu | Sam Ochieng’ | Verrah Akinyi Otiende | Chinedu Emmanuel Mbonu | Sakayo Toadoum Sari | Yao Lu | Pontus Stenetorp
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET: COMET evaluation metrics for African languages by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R) to create the state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).
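As context for the headline number, Spearman-rank correlation compares how a metric ranks translations against how humans rank them. Below is a minimal pure-Python sketch using the textbook no-ties formula; real evaluations such as this one rely on library implementations that also handle tied ranks:

```python
def spearman_rank(metric_scores, human_scores):
    """Spearman correlation via the no-ties formula:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the two ranks of item i."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(metric_scores), ranks(human_scores)
    n = len(metric_scores)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Identical rankings give rho = 1.0; fully reversed rankings give -1.0.
rho = spearman_rank([0.2, 0.5, 0.7, 0.9], [3.0, 4.0, 6.0, 7.5])
```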
2023
NollySenti: Leveraging Transfer Learning and Machine Translation for Nigerian Movie Sentiment Classification
Iyanuoluwa Shode | David Ifeoluwa Adelani | Jing Peng | Anna Feldman
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Africa has over 2000 indigenous languages, but they are under-represented in NLP research due to a lack of datasets. In recent years, there has been progress in developing labelled corpora for African languages. However, these corpora are often available in a single domain and may not generalize to other domains. In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We create a new dataset of Nollywood movie reviews for five languages widely spoken in Nigeria (English, Hausa, Igbo, Nigerian Pidgin, and Yoruba). We provide an extensive empirical evaluation using classical machine learning methods and pre-trained language models. By leveraging transfer learning, we compare the performance of cross-domain adaptation from the Twitter domain and cross-lingual adaptation from the English language. Our evaluation shows that transfer from English in the same target domain leads to more than 5% improvement in accuracy compared to transfer from Twitter in the same language. To further mitigate the domain difference, we leverage machine translation from English to other Nigerian languages, which leads to a further improvement of 7% over cross-lingual evaluation. While machine translation to low-resource languages is often of low quality, our analysis shows that sentiment-related words are often preserved.
AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages
Odunayo Ogundepo | Tajuddeen R. Gwadabe | Clara E. Rivera | Jonathan H. Clark | Sebastian Ruder | David Ifeoluwa Adelani | Bonaventure F. P. Dossou | Abdou Aziz Diop | Claytone Sikasote | Gilles Hacheme | Happy Buzaaba | Ignatius Ezeani | Rooweither Mabuya | Salomey Osei | Chris Emezue | Albert Njoroge Kahira | Shamsuddeen Hassan Muhammad | Akintunde Oladipo | Abraham Toluwase Owodunni | Atnafu Lambebo Tonja | Iyanuoluwa Shode | Akari Asai | Tunde Oluwaseyi Ajayi | Clemencia Siro | Steven Arthur | Mofetoluwa Adeyemi | Orevaoghene Ahia | Anuoluwapo Aremu | Oyinkansola Awosan | Chiamaka Chukwuneke | Bernard Opoku | Awokoya Ayodele | Verrah Otiende | Christine Mwase | Boyd Sinkala | Andre Niyongabo Rubungo | Daniel A. Ajisafe | Emeka Felix Onwuegbuzia | Habib Mbow | Emile Niyomutabazi | Eunice Mukonde | Falalu Ibrahim Lawan | Ibrahim Said Ahmad | Jesujoba O. Alabi | Martin Namukombo | Mbonu Chinedu | Mofya Phiri | Neo Putini | Ndumiso Mngoma | Priscilla A. Amouk | Ruqayya Nasir Iro | Sonia Adhiambo
Findings of the Association for Computational Linguistics: EMNLP 2023
African languages have far less in-language content available digitally, making it challenging for question answering systems to satisfy the information needs of users. Cross-lingual open-retrieval question answering (XOR QA) systems, which retrieve answer content from other languages while serving people in their native language, offer a means of filling this gap. To this end, we create AfriQA, the first cross-lingual QA dataset with a focus on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African languages. While previous datasets have focused primarily on languages where cross-lingual QA augments coverage from the target language, AfriQA focuses on languages where cross-lingual answer content is the only high-coverage source of answer content. Because of this, we argue that African languages are one of the most important and realistic use cases for XOR QA. Our experiments demonstrate the poor performance of automatic translation and multilingual retrieval methods. Overall, AfriQA proves challenging for state-of-the-art QA models. We hope that the dataset enables the development of more equitable QA technology.
MasakhaNEWS: News Topic Classification for African languages
David Ifeoluwa Adelani | Marek Masiak | Israel Abebe Azime | Jesujoba Alabi | Atnafu Lambebo Tonja | Christine Mwase | Odunayo Ogundepo | Bonaventure F. P. Dossou | Akintunde Oladipo | Doreen Nixdorf | Chris Chinenye Emezue | Sana Al-azzawi | Blessing Sibanda | Davis David | Lolwethu Ndolela | Jonathan Mukiibi | Tunde Ajayi | Tatiana Moteu | Brian Odhiambo | Abraham Owodunni | Nnaemeka Obiefuna | Muhidin Mohamed | Shamsuddeen Hassan Muhammad | Teshome Mulugeta Ababu | Saheed Abdullahi Salahudeen | Mesay Gemeda Yigezu | Tajuddeen Gwadabe | Idris Abdulmumin | Mahlet Taye | Oluwabusayo Awoyomi | Iyanuoluwa Shode | Tolulope Adelani | Habiba Abdulganiyu | Abdul-Hakeem Omotayo | Adetola Adeeko | Abeeb Afolabi | Anuoluwapo Aremu | Olanrewaju Samuel | Clemencia Siro | Wangari Kimotho | Onyekachi Ogbu | Chinedu Mbonu | Chiamaka Chukwuneke | Samuel Fanijo | Jessica Ojo | Oyinkansola Awosan | Tadesse Kebede | Toadoum Sari Sakayo | Pamela Nyatsine | Freedmore Sidume | Oreen Yousuf | Mardiyyah Oduwole | Kanda Tshinu | Ussen Kimanuka | Thina Diko | Siyanda Nxakama | Sinodos Nigusse | Abdulmejid Johar | Shafie Mohamed | Fuad Mire Hassan | Moges Ahmed Mehamed | Evrard Ngabire | Jules Jules | Ivan Ssenkungu | Pontus Stenetorp
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Masakhane-Afrisenti at SemEval-2023 Task 12: Sentiment Analysis using Afro-centric Language Models and Adapters for Low-resource African Languages
Israel Abebe Azime | Sana Sabah Al-Azzawi | Atnafu Lambebo Tonja | Iyanuoluwa Shode | Jesujoba Alabi | Ayodele Awokoya | Mardiyyah Oduwole | Tosin Adewumi | Samuel Fanijo | Awosan Oyinkansola | Oreen Yousuf
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Detecting harmful content on social media platforms is crucial in preventing the negative effects these posts can have on social media users. This paper presents our methodology for tackling task 10 from SemEval23, which focuses on detecting and classifying online sexism in social media posts. We constructed our solution using an ensemble of fine-tuned transformer-based models (BERTweet, RoBERTa, and DeBERTa). To alleviate the various issues caused by the class imbalance in the dataset provided and improve the generalization of our model, our framework employs data augmentation and semi-supervised learning. Specifically, we use back-translation for data augmentation in two scenarios: augmenting the underrepresented class and augmenting all classes. In this study, we analyze the impact of these different strategies on the system’s overall performance and determine which technique is the most effective. Extensive experiments demonstrate the efficacy of our approach. For sub-task A, the system achieved an F1-score of 0.8613. The source code to reproduce the proposed solutions is available on Github.
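The first augmentation scenario, back-translating only the underrepresented class, can be sketched as follows. This is a toy illustration, not the paper's code: the `back_translate` stub stands in for a real MT round trip (e.g., English to French and back), and the labels and texts are invented:

```python
from collections import Counter

def back_translate(text):
    # Placeholder for a real machine-translation round trip;
    # a production system would call an MT model here instead.
    return text + " (paraphrased)"

def augment_minority(dataset, target_label):
    """Back-translate only examples of the underrepresented class,
    adding the paraphrases as extra training examples."""
    augmented = list(dataset)
    for text, label in dataset:
        if label == target_label:
            augmented.append((back_translate(text), label))
    return augmented

data = [("post one", "sexist"), ("post two", "not_sexist"),
        ("post three", "not_sexist"), ("post four", "not_sexist")]
balanced = augment_minority(data, "sexist")
counts = Counter(label for _, label in balanced)
```

The second scenario described above simply applies the same round trip to every example, regardless of label.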
FEED PETs: Further Experimentation and Expansion on the Disambiguation of Potentially Euphemistic Terms
Patrick Lee | Iyanuoluwa Shode | Alain Trujillo | Yuan Zhao | Olumide Ojo | Diana Plancarte | Anna Feldman | Jing Peng
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
Transformers have been shown to work well for the task of English euphemism disambiguation, in which a potentially euphemistic term (PET) is classified as euphemistic or non-euphemistic in a particular context. In this study, we expand on the task in two ways. First, we annotate PETs for vagueness, a linguistic property associated with euphemisms, and find that transformers are generally better at classifying vague PETs, suggesting linguistic differences in the data that impact performance. Second, we present novel euphemism corpora in three different languages: Yoruba, Spanish, and Mandarin Chinese. We perform euphemism disambiguation experiments in each language using multilingual transformer models mBERT and XLM-RoBERTa, establishing preliminary results from which to launch future work.
2022
AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages
Bonaventure F. P. Dossou | Atnafu Lambebo Tonja | Oreen Yousuf | Salomey Osei | Abigail Oppong | Iyanuoluwa Shode | Oluwabusayo Olufunke Awoyomi | Chris Emezue
Proceedings of the Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)
In recent years, multilingual pre-trained language models have gained prominence due to their remarkable performance on numerous downstream Natural Language Processing (NLP) tasks. However, pre-training these large multilingual language models requires a lot of training data, which is not available for African languages. Active learning is a semi-supervised learning algorithm in which a model consistently and dynamically learns to identify the most beneficial samples to train itself on, in order to achieve better optimization and performance on downstream tasks. Furthermore, active learning effectively and practically addresses real-world data scarcity. Despite all its benefits, active learning has received little consideration in the context of NLP, especially multilingual language model pretraining. In this paper, we present AfroLM, a multilingual language model pretrained from scratch on 23 African languages (the largest effort to date) using our novel self-active learning framework. Pretrained on a dataset significantly (14x) smaller than existing baselines, AfroLM outperforms many multilingual pretrained language models (AfriBERTa, XLMR-base, mBERT) on various NLP downstream tasks (NER, text classification, and sentiment analysis). Additional out-of-domain sentiment analysis experiments show that AfroLM is able to generalize well across various domains. We release the source code and the datasets used in our framework at https://github.com/bonaventuredossou/MLM_AL.
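The active-learning idea described above, repeatedly selecting the most beneficial unlabeled samples for training, can be illustrated with a toy uncertainty-sampling loop. Everything here (the function names, the stand-in scorer, and the use of a dict as a fake model) is illustrative, not the AfroLM framework itself:

```python
def most_informative(pool, predict_proba, k):
    """Rank unlabeled samples by uncertainty: a predicted probability
    closest to 0.5 is least confident, hence most beneficial to add."""
    return sorted(pool, key=lambda x: abs(predict_proba(x) - 0.5))[:k]

def active_learning_rounds(pool, train, predict_proba, k, rounds):
    """Toy loop: each round moves the k most uncertain pool samples
    into the training set (the retraining step itself is omitted)."""
    pool, train = list(pool), list(train)
    for _ in range(rounds):
        picked = most_informative(pool, predict_proba, k)
        for sample in picked:
            pool.remove(sample)
        train.extend(picked)
    return pool, train

# Stand-in scorer: a dict mapping each sample to a fake confidence.
proba = {"a": 0.9, "bb": 0.55, "ccc": 0.48, "dddd": 0.1}
pool, train = active_learning_rounds(list(proba), [], proba.get, k=1, rounds=2)
```

In a real pretraining setup the scorer would come from the model's own predictions, recomputed after each round.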
Co-authors
- David Ifeoluwa Adelani 5
- Anuoluwapo Aremu 4
- Atnafu Lambebo Tonja 4
- Jesujoba Alabi 3
- Chiamaka Chukwuneke 3
- Bonaventure F. P. Dossou 3
- Chris Chinenye Emezue 3
- Anna Feldman 3
- Shamsuddeen Hassan Muhammad 3
- Christine Mwase 3
- Salomey Osei 3
- Jing Peng 3
- Clemencia Siro 3
- Oreen Yousuf 3
- Tosin Adewumi 2
- Abeeb Afolabi 2
- Sana Al-Azzawi 2
- Oyinkansola Awosan 2
- Oluwabusayo Olufunke Awoyomi 2
- Israel Abebe Azime 2
- Samuel Fanijo 2
- Ruqayya Nasir Iro 2
- Wangari Kimotho 2
- Patrick Lee 2
- Marek Masiak 2
- Muhidin Mohamed 2
- Lolwethu Ndolela 2
- Nnaemeka Obiefuna 2
- Mardiyyah Oduwole 2
- Odunayo Ogundepo 2
- Jessica Ojo 2
- Olumide Ojo 2
- Akintunde Oladipo 2
- Abdul-Hakeem Omotayo 2
- Bernard Opoku 2
- Abraham Toluwase Owodunni 2
- Pontus Stenetorp 2
- Yuan Zhao 2
- Teshome Mulugeta Ababu 1
- Habiba Abdulganiyu 1
- Saheed S. Abdullahi 1
- Idris Abdulmumin 1
- Daud Abolade 1
- Adetola Adeeko 1
- Tolulope Adelani 1
- Mofetoluwa Adeyemi 1
- Sonia Adhiambo 1
- Sweta Agrawal 1
- Orevaoghene Ahia 1
- Ibrahim Said Ahmad 1
- Simbiat Ajao 1
- Tunde Oluwaseyi Ajayi 1
- Tunde Ajayi 1
- Daniel A. Ajisafe 1
- Zainab Akinjobi 1
- Sana Sabah Al-Azzawi 1
- Lama Alkhaled 1
- Priscilla A. Amouk 1
- Steven Arthur 1
- Akari Asai 1
- Ayodele Awokoya 1
- Oluwabusayo Awoyomi 1
- Hassan Ayinde 1
- Awokoya Ayodele 1
- Sofia Bourhim 1
- Eleftheria Briakou 1
- Andiswa Bukula 1
- Happy Buzaaba 1
- Marine Carpuat 1
- Mbonu Chinedu 1
- Alain Chirino Trujillo 1
- Jonathan H. Clark 1
- Diana Cuevas Plancarte 1
- Davis David 1
- Thina Diko 1
- Abdou Aziz Diop 1
- A. Seza Doğruöz 1
- Salma El Anigri 1
- Naome A. Etori 1
- Ignatius Ezeani 1
- Tajuddeen R. Gwadabe 1
- Tajuddeen Gwadabe 1
- Gilles Hacheme 1
- Fuad Mire Hassan 1
- Xuanli He 1
- Oumaima Hourrane 1
- Abdulmejid Johar 1
- Jules Jules 1
- Albert Njoroge Kahira 1
- Tadesse Kebede 1
- Ussen Kimanuka 1
- Wangui Kimotho 1
- Njoroge Kiragu 1
- Falalu Ibrahim Lawan 1
- Xinyi Liu 1
- Yao Lu 1
- Rooweither Mabuya 1
- Ricky Macharm 1
- Thabiso Mangwana 1
- Chinedu Mbonu 1
- Chinedu Emmanuel Mbonu 1
- Habib Mbow 1
- Moges Ahmed Mehamed 1
- Ndumiso Mngoma 1
- Shafie Mohamed 1
- Shafie Abdi Mohamed 1
- Hamam Mokayed 1
- Stephen E. Moore 1
- Tatiana Moteu 1
- Eric Muchiri 1
- Jonathan Mukiibi 1
- Eunice Mukonde 1
- Martin Namukombo 1
- Evrard Ngabire 1
- Sinodos Nigusse 1
- Doreen Nixdorf 1
- Emile Niyomutabazi 1
- Siyanda Nxakama 1
- Pamela Nyatsine 1
- Millicent Ochieng 1
- Sam Ochieng’ 1
- Brian Odhiambo 1
- Perez Ogayo 1
- Onyekachi Ogbu 1
- Onyekachi Raphael Ogbu 1
- Temitayo Olatoye 1
- Emeka Felix Onwuegbuzia 1
- Abigail Oppong 1
- Verrah Otiende 1
- Verrah Akinyi Otiende 1
- Awosan Oyinkansola 1
- Mofya Phiri 1
- Diana Plancarte 1
- Neo Putini 1
- Ricardo Rei 1
- Clara E. Rivera 1
- Andre Niyongabo Rubungo 1
- Sebastian Ruder 1
- Toadoum Sari Sakayo 1
- Saheed Abdullahi Salahudeen 1
- Olanrewaju Samuel 1
- Blessing Kudzaishe Sibanda 1
- Freedmore Sidume 1
- Claytone Sikasote 1
- Boyd Sinkala 1
- Ivan Ssenkungu 1
- Mahlet Taye 1
- Sakayo Toadoum Sari 1
- Alain Trujillo 1
- Kanda Tshinu 1
- Lyse Naomi Wamba Momo 1
- Jiayi Wang 1
- Mesay Gemeda Yigezu 1
- Foutse Yuehgoh 1