Macton Mgonzo
2026
Learning from Scarcity: Building and Benchmarking Speech Technology for Sukuma
Macton Mgonzo | Kezia Oketch | Naome A Etori | Winnie Mang'eni | Elizabeth Fabian Nyaki | Michael Samwel Mollel
Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
Automatic Speech Recognition (ASR) systems are gaining increasing attention in both academia and industry. Although they perform remarkably well in high-resource languages, their efficacy is less pronounced in low-resource settings. We present the first ASR system for Sukuma, one of the most severely under-resourced Tanzanian languages, and provide an open-source Sukuma speech corpus comprising 7.47 hours of carefully transcribed audio. The data, sourced primarily from Bible readings, was rigorously annotated to ensure phonetic and orthographic consistency, making it the most linguistically reliable resource currently available for Sukuma. To establish baselines, we train lightweight ASR and Text-to-Speech (TTS) models that demonstrate the feasibility of building end-to-end speech systems for this underrepresented language. This work addresses the challenges of developing language and communication tools for speakers of less-represented languages, particularly the scarcity of representative datasets and benchmarks, and highlights future research directions for linguistically challenging languages such as Sukuma. We make our data and code publicly available to facilitate reproducibility and further research.
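For readers unfamiliar with how such low-resource baselines are typically built, here is a minimal, hypothetical sketch of one common recipe: fine-tuning a pretrained multilingual wav2vec 2.0 encoder with a CTC head via Hugging Face Transformers. This is an illustration of the general technique, not the authors' released code; the vocabulary, audio, and transcript below are placeholders, not the paper's corpus or configuration.

```python
# Hypothetical sketch of a lightweight low-resource ASR baseline:
# fine-tuning a pretrained multilingual wav2vec 2.0 encoder with CTC.
# All data and paths below are placeholders, not the paper's artifacts.
import json
import torch
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

# Character-level vocabulary; a real system would derive this from the corpus.
vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz'")}
vocab["|"] = len(vocab)          # word delimiter (stands in for space)
vocab["[UNK]"] = len(vocab)
vocab["[PAD]"] = len(vocab)      # also serves as the CTC blank token
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16_000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Multilingual pretrained encoder; the CTC head is freshly initialized
# for the new vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    vocab_size=len(tokenizer),
    pad_token_id=tokenizer.pad_token_id,
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()  # common practice on very small corpora

# One training step on a dummy 3-second utterance and a placeholder transcript.
audio = torch.randn(16_000 * 3).numpy()
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
labels = tokenizer("placeholder transcript", return_tensors="pt").input_ids
# (With batched, padded labels, pad positions should be set to -100 so CTC ignores them.)

loss = model(input_values=inputs.input_values, labels=labels).loss
loss.backward()
```

Freezing the convolutional feature encoder and learning only the transformer layers and CTC head is a standard way to keep such a model trainable on just a few hours of transcribed speech.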
2025
Beyond Contrastive Learning: Synthetic Data Enables List-wise Training with Multiple Levels of Relevance
Reza Esfandiarpoor | George Zerveas | Ruochen Zhang | Macton Mgonzo | Carsten Eickhoff | Stephen Bach
Findings of the Association for Computational Linguistics: EMNLP 2025
Although synthetic data has changed many aspects of information retrieval (IR) pipelines, the main training paradigm remains the same: contrastive learning with binary relevance labels, where one positive document is compared against several negatives using the InfoNCE loss. This objective treats every document that is not explicitly annotated as relevant as equally negative, regardless of its actual degree of relevance, and thus misses subtle nuances useful for ranking. To overcome this limitation, we forgo real documents and annotations and use large language models to directly generate synthetic documents that answer MS MARCO queries at several different levels of relevance. We also propose the Wasserstein distance as a more effective loss function for training transformer-based retrievers with graduated relevance labels. Our experiments on the MS MARCO and BEIR benchmarks show that the proposed approach outperforms conventional training with InfoNCE by a large margin. Without using any real documents, our method significantly improves self-supervised retrievers and is more robust to distribution shift than contrastive learning on real data. Our method also successfully integrates existing real data into the synthetic ranking context, further boosting performance. Overall, we show that generating multi-level ranking contexts is a better approach to synthetic data generation for IR than generating only the standard positive and negative documents.
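To make the contrast concrete, here is a minimal, hypothetical PyTorch sketch of the two objectives the abstract compares: standard InfoNCE over one positive and several negatives, and a listwise loss based on the 1-D Wasserstein distance between the model's score distribution over a ranking context and a target distribution derived from graded relevance labels. This illustrates the general technique only; function names, the score-to-distribution mapping, and hyperparameters are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch, not the paper's code: InfoNCE with binary labels vs.
# a listwise 1-D Wasserstein loss over graded relevance labels.
import torch
import torch.nn.functional as F

def infonce_loss(scores: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """scores: (batch, 1 + num_negatives) similarities; column 0 is the positive."""
    logits = scores / temperature
    targets = torch.zeros(scores.size(0), dtype=torch.long, device=scores.device)
    return F.cross_entropy(logits, targets)

def wasserstein_listwise_loss(scores: torch.Tensor, relevance: torch.Tensor) -> torch.Tensor:
    """scores: (batch, list_size) similarities; relevance: graded labels, e.g. 0..3.

    Both are normalized into distributions over the ranking context; for
    discrete 1-D distributions on a shared support, the Wasserstein-1
    distance equals the L1 distance between their CDFs.
    """
    pred = F.softmax(scores, dim=-1)
    target = relevance.float() / relevance.float().sum(dim=-1, keepdim=True)  # assumes >=1 relevant doc per list
    return (pred.cumsum(dim=-1) - target.cumsum(dim=-1)).abs().sum(dim=-1).mean()

# Toy usage: one ranking context of four documents with graded labels 3, 2, 1, 0.
scores = torch.tensor([[0.9, 0.7, 0.2, -0.1]])
labels = torch.tensor([[3, 2, 1, 0]])
print(infonce_loss(scores), wasserstein_listwise_loss(scores, labels))
```

The key difference this sketch highlights: InfoNCE only pushes the single positive above everything else, whereas the Wasserstein objective rewards the model for matching the full graded ordering of the ranking context.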