ArabicSense: A Benchmark for Evaluating Commonsense Reasoning in Arabic with Large Language Models

Salima Lamsiyah, Kamyar Zeinalipour, Samir El amrany, Matthias Brust, Marco Maggini, Pascal Bouvry, Christoph Schommer


Abstract
Recent efforts in natural language processing (NLP) commonsense reasoning research have led to the development of numerous new datasets and benchmarks. However, these resources have predominantly been limited to English, leaving a gap in evaluating commonsense reasoning in other languages. In this paper, we introduce the ArabicSense Benchmark, which is designed to thoroughly evaluate the world-knowledge commonsense reasoning abilities of large language models (LLMs) in Arabic. This benchmark includes three main tasks: first, it tests whether a system can distinguish between natural language statements that make sense and those that do not; second, it requires a system to identify the most crucial reason why a nonsensical statement fails to make sense; and third, it involves generating explanations for why statements do not make sense. We evaluate several Arabic BERT-based models and causal LLMs on these tasks. Experimental results demonstrate improvements after fine-tuning on our dataset. For instance, AraBERT v2 achieved an 87% F1 score on the second task, while Gemma and Mistral-7b achieved F1 scores of 95.5% and 94.8%, respectively. For the generation task, LLaMA-3 achieved the best performance with a BERTScore F1 of 77.3%, closely followed by Mistral-7b at 77.1%. All code and the benchmark will be made publicly available at https://github.com/.
Anthology ID:
2025.wacl-1.1
Volume:
Proceedings of the 4th Workshop on Arabic Corpus Linguistics (WACL-4)
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Saad Ezzini, Hamza Alami, Ismail Berrada, Abdessamad Benlahbib, Abdelkader El Mahdaouy, Salima Lamsiyah, Hatim Derrouz, Amal Haddad Haddad, Mustafa Jarrar, Mo El-Haj, Ruslan Mitkov, Paul Rayson
Venues:
WACL | WS
Publisher:
Association for Computational Linguistics
Pages:
1–11
URL:
https://aclanthology.org/2025.wacl-1.1/
Cite (ACL):
Salima Lamsiyah, Kamyar Zeinalipour, Samir El amrany, Matthias Brust, Marco Maggini, Pascal Bouvry, and Christoph Schommer. 2025. ArabicSense: A Benchmark for Evaluating Commonsense Reasoning in Arabic with Large Language Models. In Proceedings of the 4th Workshop on Arabic Corpus Linguistics (WACL-4), pages 1–11, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
ArabicSense: A Benchmark for Evaluating Commonsense Reasoning in Arabic with Large Language Models (Lamsiyah et al., WACL 2025)
PDF:
https://aclanthology.org/2025.wacl-1.1.pdf