Benjamin Akera
2026
Real-Time Spoken Instruction Following and Translation in Ugandan Languages
Benjamin Akera | Tim Wenjie Hu | Patrick Walukagga | Evelyn Nafula Ouma | Yiga Gilbert | Ernest Tonny Mwebaze | John Quinn
Proceedings of the 7th Workshop on African Natural Language Processing (AfricaNLP 2026)
Many languages are predominantly spoken rather than written, and to bring the benefits of LLMs to speakers of these languages, it is essential that models cater to the voice modality. The typical approach is to cascade ASR, LLM and TTS models together, though this results in systems with high latency, making them unsuitable for natural, real-time interaction. We describe results from taking the encoder of a Whisper-based model trained to recognise ten languages common in Uganda and using the Ultravox architecture to project its output directly into the input embedding space of a text model based on Qwen 3 32B, also trained to comprehend those languages. The result is a speech LLM with high accuracy and very low latency. For most spoken prompts, we can begin streaming a text response in as little as 50 ms, and a speech audio response within around one second, making real-time spoken interaction with an LLM possible for the first time in these languages. The model is available open source on Hugging Face.
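The abstract does not spell out the projector details, but the general Ultravox-style idea is to stack adjacent encoder frames and map them through a learned projection into the LLM's embedding space. A minimal sketch of that idea follows; all dimensions, the stacking factor, and the linear (rather than MLP) projector are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Illustrative shapes only (assumptions, not the paper's dimensions).
T, d_enc = 100, 1280   # Whisper encoder output: T frames of width d_enc
stack = 4              # stack adjacent frames to shorten the sequence
d_llm = 5120           # hypothetical LLM input-embedding width

rng = np.random.default_rng(0)
enc_out = rng.standard_normal((T, d_enc))  # stand-in for encoder output

# 1) Concatenate every `stack` consecutive frames into one vector,
#    reducing the sequence length by the same factor.
T2 = T // stack
stacked = enc_out[: T2 * stack].reshape(T2, stack * d_enc)

# 2) Project into the LLM's input embedding space with a learned map
#    (random here; in practice trained end-to-end on paired data).
W = rng.standard_normal((stack * d_enc, d_llm)) * 0.01
speech_embeds = stacked @ W

print(speech_embeds.shape)  # (25, 5120)
```

The resulting `speech_embeds` rows would be fed to the text model in place of (or alongside) ordinary token embeddings, which is what lets the LLM consume audio without an intermediate ASR transcript.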
SALT-31: A Machine Translation Benchmark Dataset for 31 Ugandan Languages
Solomon Nsumba | Benjamin Akera | Evelyn Nafula Ouma | Medadi E. Ssentanda | Deo Kawalya | Engineer Bainomugisha | Ernest Tonny Mwebaze | John Quinn
Proceedings of the 7th Workshop on African Natural Language Processing (AfricaNLP 2026)
We present the SALT-31 benchmark dataset for evaluation of machine translation models covering 31 Ugandan languages. Unlike sentence-level evaluation sets, SALT-31 is constructed from short, scenario-driven mini-dialogues designed to preserve discourse context, pragmatics, and culturally grounded communication patterns common in everyday Ugandan settings. The dataset contains 100 English sentences organized into 20 typical communication scenarios, each represented as a five-sentence mini-sequence. It can therefore be used to evaluate both sentence-level and paragraph-level machine translation, and includes nearly every language spoken in a country with high linguistic diversity. It is available at https://huggingface.co/datasets/Sunbird/salt-31
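The 20-scenarios-of-5-sentences structure supports two evaluation granularities. In practice one would load the dataset with the `datasets` library from the URL above; the sketch below instead uses a toy stand-in so it is self-contained, and its field names and layout are illustrative assumptions rather than the dataset's actual schema:

```python
# Toy stand-in for SALT-31's structure: 20 scenarios, each a
# five-sentence mini-dialogue (100 English source sentences total).
scenarios = {
    f"scenario_{i:02d}": [f"Sentence {j + 1} of scenario {i + 1}." for j in range(5)]
    for i in range(20)
}

# Sentence-level evaluation: score each sentence independently.
sentences = [s for dialogue in scenarios.values() for s in dialogue]
print(len(sentences))   # 100

# Paragraph-level evaluation: score each mini-dialogue as one unit,
# so the metric sees the discourse context across its five sentences.
paragraphs = [" ".join(dialogue) for dialogue in scenarios.values()]
print(len(paragraphs))  # 20
```

Running a model on `paragraphs` rather than `sentences` is what exercises the context-dependent phenomena (anaphora, pragmatics) the dataset is designed to preserve.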
2021
hBERT + BiasCorp - Fighting Racism on the Web
Olawale Onabola | Zhuang Ma | Xie Yang | Benjamin Akera | Ibraheem Abdulrahman | Jia Xue | Dianbo Liu | Yoshua Bengio
Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion
Subtle and overt racism is still present in both physical and online communities today and has impacted many lives in different segments of society. In this short piece of work, we present how we’re tackling this societal issue with Natural Language Processing. We are releasing BiasCorp, a dataset containing 139,090 comments and news segments from three specific sources - Fox News, Breitbart News and YouTube. The first batch (45,000 manually annotated) is ready for publication. We are currently in the final phase of manually labeling the remaining dataset using Amazon Mechanical Turk. BERT has been used widely in several downstream tasks. In this work, we present hBERT, where we modify certain layers of the pretrained BERT model with the new Hopfield Layer. hBERT generalizes well across different distributions with the added advantage of a reduced model complexity. We are also releasing a JavaScript library and a Chrome Extension Application, to help developers make use of our trained model in web applications (say, chat applications) and for users to identify and report racially biased content on the web, respectively.
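The Hopfield Layer mentioned here follows the modern (continuous) Hopfield network update, which is closely related to attention: a query state is replaced by a softmax-weighted combination of stored patterns. A minimal numpy sketch of one retrieval step follows; the pattern sizes, the inverse temperature `beta`, and the single-step update are illustrative simplifications, not the paper's exact layer:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Stored patterns (rows), unit-normalised, e.g. token representations.
X = rng.standard_normal((8, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Query state: a noisy copy of stored pattern 3.
xi = X[3] + 0.1 * rng.standard_normal(16)

# One modern-Hopfield update: xi_new = X^T softmax(beta * X xi).
# With a large enough beta, this retrieves the nearest stored pattern.
beta = 16.0
xi_new = X.T @ softmax(beta * (X @ xi))

print(np.argmax(X @ xi_new))  # index of the retrieved pattern
```

Because this update has the same form as scaled dot-product attention, such a layer can be dropped in where an attention block would sit, which is what makes swapping it into certain BERT layers straightforward.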