Norah A. Alzahrani
Also published as: Norah Alzahrani
2025
AraEval: An Arabic Multi-Task Evaluation Suite for Large Language Models
Alhanoof Althnian | Norah A. Alzahrani | Shaykhah Z. Alsubaie | Eman Albilali | Ahmed Abdelali | Nouf M. Alotaibi | M Saiful Bari | Yazeed Alnumay | Abdulhamed Alothaimen | Maryam Saif | Shahad D. Alzaidi | Faisal Abdulrahman Mirza | Yousef Almushayqih | Mohammed Al Saleem | Ghadah Alabduljabbar | Abdulmohsen Al-Thubaity | Areeb Alowisheq | Nora Al-Twairesh
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The rapid advancements of Large Language Models (LLMs) necessitate robust benchmarks. In this paper, we present AraEval, a pioneering and comprehensive evaluation suite specifically developed to assess the advanced knowledge, reasoning, truthfulness, and instruction-following capabilities of foundation models in the Arabic context. AraEval includes a diverse set of evaluation tasks that test various dimensions of knowledge and reasoning, with a total of 24,378 samples. These tasks cover areas such as linguistic understanding, factual recall, logical inference, commonsense reasoning, mathematical problem-solving, and domain-specific expertise, ensuring that the evaluation goes beyond basic language comprehension. It covers multiple domains of knowledge, such as science, history, religion, and literature, ensuring that the LLMs are tested on a broad spectrum of topics relevant to Arabic-speaking contexts. AraEval is designed to facilitate comparisons across different foundation models, enabling LLM developers and users to benchmark performance effectively. In addition, it provides diagnostic insights to identify specific areas where models excel or struggle, guiding further development. AraEval datasets can be found at https://huggingface.co/collections/humain-ai/araeval-datasets-687760e04b12a7afb429a4a0.
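Since the AraEval datasets are distributed as a Hugging Face collection, a minimal sketch of loading one task with the `datasets` library might look like the following. The repository ID `humain-ai/araeval-example-task` and the `test` split are illustrative assumptions, not confirmed names; the actual repository names are listed on the collection page linked above.

```python
# Minimal sketch: loading one AraEval task from the Hugging Face Hub.
# NOTE: the dataset ID and split below are hypothetical placeholders;
# consult the AraEval collection page for the real repository names.
from datasets import load_dataset

dataset = load_dataset("humain-ai/araeval-example-task", split="test")  # hypothetical ID

# Inspect a few samples; the exact fields depend on the specific task.
for sample in dataset.select(range(3)):
    print(sample)
```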
BALSAM: A Platform for Benchmarking Arabic Large Language Models
Rawan Al-Matham | Kareem Darwish | Raghad Al-Rasheed | Waad Alshammari | Muneera Alhoshan | Amal Almazrua | Asma Al Wazrah | Mais Alheraki | Firoj Alam | Preslav Nakov | Norah Alzahrani | Eman AlBilali | Nizar Habash | Abdelrahman El-Sheikh | Muhammad Elmallah | Haonan Li | Hamdy Mubarak | Mohamed Anwar | Zaid Alyafeai | Ahmed Abdelali | Nora Altwairesh | Maram Hasanain | Abdulmohsen Al Thubaity | Shady Shehata | Bashar Alhafni | Injy Hamed | Go Inoue | Khalid Elmadani | Ossama Obeid | Fatima Haouari | Tamer Elsayed | Emad Alghamdi | Khalid Almubarak | Saied Alshahrani | Ola Aljarrah | Safa Alajlan | Areej Alshaqarawi | Maryam Alshihri | Sultana Alghurabi | Atikah Alzeghayer | Afrah Altamimi | Abdullah Alfaifi | Abdulrahman AlOsaimy
Proceedings of The Third Arabic Natural Language Processing Conference
The impressive advancement of Large Language Models (LLMs) in English has not been matched across all languages. In particular, LLM performance in Arabic lags behind, due to data scarcity, linguistic diversity of Arabic and its dialects, morphological complexity, etc. Progress is further hindered by the quality of Arabic benchmarks, which typically rely on static, publicly available data, lack comprehensive task coverage, or do not provide dedicated platforms with blind test sets. This makes it challenging to measure actual progress and to mitigate data contamination. Here, we aim to bridge these gaps. In particular, we introduce BALSAM, a comprehensive, community-driven benchmark aimed at advancing Arabic LLM development and evaluation. It includes 78 NLP tasks from 14 broad categories, with 52K examples divided into 37K test and 15K development, and a centralized, transparent platform for blind evaluation. We envision BALSAM as a unifying platform that sets standards and promotes collaborative research to advance Arabic LLM capabilities.
LC-Eval: A Bilingual Multi-Task Evaluation Benchmark for Long-Context Understanding
Sheikh Jubair | Arwa Omayrah | Amal Alshammari | Alhanoof Althnian | Abdulhamed Alothaimen | Norah A. Alzahrani | Shahad D. Alzaidi | Nora Al-Twairesh | Abdulmohsen Al-Thubaity
Findings of the Association for Computational Linguistics: EMNLP 2025
Recent advancements in Large Language Models (LLMs) have demonstrated sophisticated capabilities, including the ability to process and comprehend extended contexts. These emergent capabilities necessitate rigorous evaluation methods to effectively assess their performance in long-context understanding. In this paper, we present LC-Eval, a bilingual, multi-task evaluation benchmark designed to evaluate long-context understanding in English and Arabic, targeting context lengths ranging from 4k to over 128k tokens. LC-Eval introduces four novel and challenging tasks: multi-document question answering, bilingual question answering, claim verification within a paragraph, and multiple-choice questions based on long contexts. These tasks are designed to assess LLMs’ abilities in deep reasoning, document comprehension, information tracing, and bilingual information extraction and understanding. The benchmark includes datasets in both Arabic and English for each task, allowing for a comparative analysis of their performance across different text genres. Evaluations were conducted on both open-weight and closed LLMs, with results indicating that LC-Eval presents significant challenges. Even high-performing models, such as GPT-4o, struggled with certain tasks, highlighting the complexity and rigor of the benchmark.
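Because LC-Eval groups its items by context length (from 4k to over 128k tokens), a small sketch of how such length bucketing can be done is shown below; the tokenizer choice is an assumption, and any tokenizer exposing an `encode` method would work the same way.

```python
# Minimal sketch: assigning evaluation contexts to token-length buckets
# mirroring the 4k to 128k+ ranges that LC-Eval targets.
# The "gpt2" tokenizer here is only a placeholder for illustration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

BUCKETS = [(0, 4_096), (4_096, 32_768), (32_768, 131_072), (131_072, float("inf"))]

def length_bucket(text: str) -> tuple:
    """Return the (low, high) token-length bucket that `text` falls into."""
    n_tokens = len(tokenizer.encode(text))
    for low, high in BUCKETS:
        if low <= n_tokens < high:
            return (low, high)

print(length_bucket("A short Arabic or English passage."))  # -> (0, 4096)
```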
2024
When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards
Norah Alzahrani | Hisham Alyahya | Yazeed Alnumay | Sultan AlRashed | Shaykhah Alsubaie | Yousef Almushayqih | Faisal Mirza | Nouf Alotaibi | Nora Al-Twairesh | Areeb Alowisheq | M Saiful Bari | Haidar Khan
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection. Often, the published leaderboard rankings are taken at face value — we show this is a (potentially costly) mistake. Under existing leaderboards, the relative performance of LLMs is highly sensitive to (often minute) details. We show that for popular multiple-choice question benchmarks (e.g., MMLU), minor perturbations to the benchmark, such as changing the order of choices or the method of answer selection, result in changes in rankings up to 8 positions. We explain this phenomenon by conducting systematic experiments over three broad categories of benchmark perturbations and identifying the sources of this behavior. Our analysis results in several best-practice recommendations, including the advantage of a *hybrid* scoring method for answer selection. Our study highlights the dangers of relying on simple benchmark evaluations and charts the path for more robust evaluation schemes on the existing benchmarks. The code for this paper is available at [https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness](https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness).
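One of the perturbations the abstract describes is changing the order of answer choices in multiple-choice items. A minimal sketch of that idea is shown below; the item structure and function name are generic illustrations and are not taken from the released lm-evaluation-harness code.

```python
# Minimal sketch: shuffling the order of answer choices in an MCQ item
# while tracking where the gold answer moves, so accuracy can be
# re-measured under the perturbed choice ordering.
import random

def shuffle_choices(question: str, choices: list[str], gold_index: int, seed: int = 0):
    """Return the question, reordered choices, and the new gold index."""
    rng = random.Random(seed)
    order = list(range(len(choices)))
    rng.shuffle(order)
    reordered = [choices[i] for i in order]
    new_gold = order.index(gold_index)
    return question, reordered, new_gold

q, opts, gold = shuffle_choices(
    "What is the capital of Saudi Arabia?",
    ["Riyadh", "Jeddah", "Dammam", "Mecca"],
    gold_index=0,
)
print(opts, "-> gold answer is now option", gold)
```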
Co-authors
- Abdulmohsen Al-Thubaity 3
- Nora Al-Twairesh 3
- Ahmed Abdelali 2
- Eman Albilali 2
- Yousef Almushayqih 2
- Yazeed Alnumay 2
- Nouf M. Alotaibi 2
- Abdulhamed Alothaimen 2
- Areeb Alowisheq 2
- Alhanoof Althnian 2
- Shahad D. Alzaidi 2
- M Saiful Bari 2
- Mohammed Al Saleem 1
- Asma Al Wazrah 1
- Rawan Al-Matham 1
- Raghad Al-Rasheed 1
- Abdulrahman AlOsaimy 1
- Sultan AlRashed 1
- Ghadah Alabduljabbar 1
- Safa Alajlan 1
- Firoj Alam 1
- Abdullah Alfaifi 1
- Emad Alghamdi 1
- Sultana Alghurabi 1
- Bashar Alhafni 1
- Mais Alheraki 1
- Muneera Alhoshan 1
- Ola Aljarrah 1
- Amal Almazrua 1
- Khalid Almubarak 1
- Saied Alshahrani 1
- Waad Thuwaini Alshammari 1
- Amal Alshammari 1
- Areej Alshaqarawi 1
- Maryam Alshihri 1
- Shaykhah Alsubaie 1
- Shaykhah Z. Alsubaie 1
- Afrah Altamimi 1
- Nora Altwairesh 1
- Zaid Alyafeai 1
- Hisham Alyahya 1
- Atikah Alzeghayer 1
- Mohamed Anwar 1
- Kareem Darwish 1
- Abdelrahman El-Sheikh 1
- Khalid Elmadani 1
- Muhammad Elmallah 1
- Tamer Elsayed 1
- Nizar Habash 1
- Injy Hamed 1
- Fatima Haouari 1
- Maram Hasanain 1
- Go Inoue 1
- Sheikh Jubair 1
- Haidar Khan 1
- Haonan Li 1
- Faisal Mirza 1
- Faisal Abdulrahman Mirza 1
- Hamdy Mubarak 1
- Preslav Nakov 1
- Ossama Obeid 1
- Arwa Omayrah 1
- Maryam Saif 1
- Shady Shehata 1