Nathaniel R. Robinson
2026
AMIYA Shared Task: Arabic Modeling In Your Accent at VarDial 2026
Nathaniel R. Robinson | Shahd Abdelmoneim | Anjali Kantharuban | Otba Alsboul | Salima Lamsiyah | Kelly Marchisio | Kenton Murray
Proceedings of the 13th Workshop on NLP for Similar Languages, Varieties and Dialects
Arabic, often considered a single language, actually encompasses a wide variety of sometimes mutually unintelligible language varieties. While large language models (LLMs) have revolutionized natural language processing (NLP) with rapid advances, these models still best serve speakers of high-resource and standard language varieties. One particular deficiency is their handling of dialectal Arabic. We present the first ever shared task for dialectal Arabic language modeling: Arabic Modeling In Your Accent, or AMIYA. The goal of the shared task was to develop LLMs that could (1) respond in the correct dialectal variety when explicitly or implicitly prompted to, (2) translate between dialectal Arabic and standard Arabic or English, (3) adhere to LLM instructions in dialectal Arabic, and (4) produce fluent Arabic outputs. We called for submissions in the dialectal varieties of five countries: Morocco, Egypt, Palestine, Syria, and Saudi Arabia. We received 45 submitted systems from six participating teams. We saw positive results from supervised fine-tuning on a translation objective, and from reinforcement learning to improve dialectness. Manual evaluation also showed that some systems had learned to output dialectal words or phrases, but at the expense of fluency or coherence. Overall, the most effective system involved continual pre-training and supervised fine-tuning of 12 candidate LLMs, followed by selection of the best-performing models.
2024
Kreyòl-MT: Building MT for Latin American, Caribbean and Colonial African Creole Languages
Nathaniel R. Robinson | Raj Dabre | Ammon Shurtz | Rasul Dent | Onenamiyi Onesi | Claire Bizon Monroc | Loïc Grobol | Hasan Muhammad | Ashi Garg | Naome A. Etori | Vijay Murari Tiyyala | Olanrewaju Samuel | Matthew Dean Stutzman | Bismarck Bamfo Odoom | Sanjeev Khudanpur | Stephen D. Richardson | Kenton Murray
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
A majority of language technologies are tailored to a small number of high-resource languages, while many low-resource languages are neglected. One such group, Creole languages, has long been marginalized in academic study, though their speakers could benefit from machine translation (MT). These languages are predominantly used in much of Latin America, Africa and the Caribbean. We present the largest cumulative dataset to date for Creole language MT, including 14.5M unique Creole sentences with parallel translations (11.6M of which we release publicly), and the largest bitexts gathered to date for 41 languages, the first ever for 21. In addition, we provide MT models supporting all 41 Creole languages in 172 translation directions. Given our diverse dataset, we produce a model for Creole language MT exposed to more genre diversity than ever before, which outperforms a genre-specific Creole MT model on its own benchmark for 23 of 34 translation directions.