Elmoatez Billah Nagoudi
Also published as:
ElMoatez Billah Nagoudi
2023
ORCA: A Challenging Benchmark for Arabic Language Understanding
AbdelRahim Elmadany | ElMoatez Billah Nagoudi | Muhammad Abdul-Mageed
Findings of the Association for Computational Linguistics: ACL 2023
Due to the crucial role pretrained language models play in modern NLP, several benchmarks have been proposed to evaluate their performance. In spite of these efforts, no public benchmark of diverse nature currently exists for evaluating Arabic NLU. This makes it challenging to measure progress for both Arabic and multilingual language models. The challenge is compounded by the fact that any benchmark targeting Arabic must account for Arabic being not a single language but rather a collection of languages and language varieties. In this work, we introduce a publicly available benchmark for Arabic language understanding evaluation dubbed ORCA. It is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks, exploiting 60 different datasets (across seven NLU task clusters). To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models. We also provide a public leaderboard with a unified single-number evaluation metric (ORCA score) to facilitate future research.
ProMap: Effective Bilingual Lexicon Induction via Language Model Prompting
Abdellah El Mekki | Muhammad Abdul-Mageed | ElMoatez Billah Nagoudi | Ismail Berrada | Ahmed Khoumsi
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
SIDLR: Slot and Intent Detection Models for Low-Resource Language Varieties
Sang Yun Kwon | Gagan Bhatia | Elmoatez Billah Nagoudi | Alcides Alcoba Inciarte | Muhammad Abdul-mageed
Tenth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2023)
Intent detection and slot filling are two critical tasks in spoken and natural language understanding for task-oriented dialog systems. In this work, we describe our participation in slot and intent detection for low-resource language varieties (SID4LR) (Aepli et al., 2023). We investigate the slot and intent detection (SID) tasks using a wide range of models and settings. Given the recent success of multitask prompted finetuning of large language models, we also test the generalization capability of the recent encoder-decoder model mT0 (Muennighoff et al., 2022) on new tasks (i.e., SID) in languages they have never intentionally seen. We show that our best model outperforms the baseline by a large margin (up to +30 F1 points) in both SID tasks.