2024
Gender Bias Evaluation in Machine Translation for Amharic, Tigrigna, and Afaan Oromoo
Walelign Sewunetie | Atnafu Tonja | Tadesse Belay | Hellina Hailu Nigatu | Gashaw Gebremeskel | Zewdie Mossie | Hussien Seid | Seid Yimam
Proceedings of the 2nd International Workshop on Gender-Inclusive Translation Technologies
While Machine Translation (MT) research has progressed over the years, translation systems still suffer from biases, including gender bias. Although an active line of research studies the existence of gender bias in machine translation systems and strategies for mitigating it, there is limited research exploring this phenomenon for low-resource languages. The limited availability of linguistic and computational resources, compounded by the lack of benchmark datasets, makes studying bias for low-resource languages all the more difficult. In this paper, we construct benchmark datasets to evaluate gender bias in machine translation for three low-resource languages: Afaan Oromoo (Orm), Amharic (Amh), and Tigrinya (Tir). Building on prior work, we collected 2,400 gender-balanced sentences translated in parallel into the three languages. From human evaluations of the collected dataset, we found that about 93% of Afaan Oromoo, 80% of Tigrinya, and 72% of Amharic sentences exhibited gender bias. In addition to providing benchmarks for improving gender bias mitigation research in the three languages, we hope the careful documentation of our work will help other low-resource language researchers extend our approach to their languages.
NLP Progress in Indigenous Latin American Languages
Atnafu Tonja | Fazlourrahman Balouchzahi | Sabur Butt | Olga Kolesnikova | Hector Ceballos | Alexander Gelbukh | Thamar Solorio
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
The paper focuses on the marginalization of indigenous language communities in the face of rapid technological advancement. We highlight the cultural richness of these languages and the risk they face of being overlooked in the realm of Natural Language Processing (NLP). We aim to bridge the gap between these communities and researchers, emphasizing the need for inclusive technological advances that respect indigenous community perspectives. We present the NLP progress of indigenous Latin American languages and a survey covering the status of indigenous languages in Latin America, their representation in NLP, and the challenges and innovations required for their preservation and development. The paper contributes to the current literature on the needs and progress of NLP for indigenous communities of Latin America in particular, and for low-resource and indigenous communities in general.
2023
The Less the Merrier? Investigating Language Representation in Multilingual Models
Hellina Nigatu | Atnafu Tonja | Jugal Kalita
Findings of the Association for Computational Linguistics: EMNLP 2023
Multilingual Language Models offer a way to incorporate multiple languages in one model and utilize cross-language transfer learning to improve performance on different Natural Language Processing (NLP) tasks. Despite progress in multilingual models, not all languages are supported equally well, particularly in low-resource settings. In this work, we investigate the linguistic representation of different languages in multilingual models. We start by asking which languages are supported in popular multilingual models and which languages are left behind. Then, for the included languages, we examine models’ learned representations by language family and dialect, and ask how these representations vary between (1) seen and (2) unseen languages across different language groups. In addition, we test and analyze performance on downstream tasks such as text generation and Named Entity Recognition. We observe from our experiments that community-centered models, i.e., models that focus on languages of a given family or geographical location and are built by communities who speak them, perform better at distinguishing between languages in the same family for low-resource languages. Our paper contributes to the literature on understanding multilingual models and their shortcomings, and offers insights on potential ways to improve them.
Cross-lingual Open-Retrieval Question Answering for African Languages
Odunayo Ogundepo | Tajuddeen Gwadabe | Clara Rivera | Jonathan Clark | Sebastian Ruder | David Adelani | Bonaventure Dossou | Abdou Diop | Claytone Sikasote | Gilles Hacheme | Happy Buzaaba | Ignatius Ezeani | Rooweither Mabuya | Salomey Osei | Chris Emezue | Albert Kahira | Shamsuddeen Muhammad | Akintunde Oladipo | Abraham Owodunni | Atnafu Tonja | Iyanuoluwa Shode | Akari Asai | Anuoluwapo Aremu | Ayodele Awokoya | Bernard Opoku | Chiamaka Chukwuneke | Christine Mwase | Clemencia Siro | Stephen Arthur | Tunde Ajayi | Verrah Otiende | Andre Rubungo | Boyd Sinkala | Daniel Ajisafe | Emeka Onwuegbuzia | Falalu Lawan | Ibrahim Ahmad | Jesujoba Alabi | Chinedu Mbonu | Mofetoluwa Adeyemi | Mofya Phiri | Orevaoghene Ahia | Ruqayya Iro | Sonia Adhiambo
Findings of the Association for Computational Linguistics: EMNLP 2023
African languages have far less in-language content available digitally, making it challenging for question answering systems to satisfy the information needs of users. Cross-lingual open-retrieval question answering (XOR QA) systems, which retrieve answer content from other languages while serving people in their native language, offer a means of filling this gap. To this end, we create AfriQA, the first cross-lingual QA dataset with a focus on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African languages. While previous datasets have focused primarily on languages where cross-lingual QA augments coverage from the target language, AfriQA focuses on languages where cross-lingual answer content is the only high-coverage source of answers. Because of this, we argue that African languages are one of the most important and realistic use cases for XOR QA. Our experiments demonstrate the poor performance of automatic translation and multilingual retrieval methods. Overall, AfriQA proves challenging for state-of-the-art QA models. We hope that the dataset enables the development of more equitable QA technology.