Yasser Alhabashi
2025
Towards Inclusive Arabic LLMs: A Culturally Aligned Benchmark in Arabic Large Language Model Evaluation
Omer Nacar | Serry Taiseer Sibaee | Samar Ahmed | Safa Ben Atitallah | Adel Ammar | Yasser Alhabashi | Abdulrahman S. Al-Batati | Arwa Alsehibani | Nour Qandos | Omar Elshehy | Mohamed Abdelkader | Anis Koubaa
Proceedings of the First Workshop on Language Models for Low-Resource Languages
Arabic Large Language Models (LLMs) are usually evaluated with Western-centric benchmarks that overlook essential cultural contexts, making them less effective and culturally misaligned for Arabic-speaking communities. This study addresses this gap by examining the Arabic Massive Multitask Language Understanding (MMLU) Benchmark to assess its cultural alignment and relevance for Arabic LLMs across culturally sensitive topics. A team of eleven experts annotated over 2,500 questions, evaluating them for fluency, adequacy, cultural appropriateness, bias detection, religious sensitivity, and adherence to social norms. Through this human assessment, the study reveals significant cultural misalignments and biases, particularly in sensitive areas such as religion and morality. In response to these findings, we propose annotation guidelines and integrate culturally enriched data sources to enhance the benchmark’s reliability and relevance. The research underscores the importance of cultural sensitivity in evaluating inclusive Arabic LLMs, fostering models that are more widely accepted by Arabic-speaking communities.
2024
ASOS at ArAIEval Shared Task: Integrating Text and Image Embeddings for Multimodal Propaganda Detection in Arabic Memes
Yasser Alhabashi | Abdullah Alharbi | Samar Ahmad | Serry Sibaee | Omer Nacar | Lahouari Ghouti | Anis Koubaa
Proceedings of The Second Arabic Natural Language Processing Conference
This paper describes our participation in the ArAIEval Shared Task 2024, focusing on Task 2C, which challenges participants to detect propagandistic elements in multimodal Arabic memes. The task involves analyzing both the textual and visual components of memes to identify underlying propagandistic messages. Our approach integrates MARBERT and ResNet50, top-performing pre-trained models for text and image processing, respectively. Our system architecture combines these models through a fusion layer that integrates and processes the extracted features, creating a comprehensive representation that is more effective at detecting nuanced propaganda. Our proposed system placed second with an F1 score of 0.7987.
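For illustration, the sketch below shows one way such a text-image fusion model could be assembled in PyTorch, assuming MARBERT is loaded from Hugging Face (UBC-NLP/MARBERT), ResNet50 from torchvision, and a simple concatenation fusion head; the specific fusion layer, dimensions, and training setup of the submitted system are not reproduced here and are assumptions.

# Illustrative sketch only: concatenation-based fusion of MARBERT text
# embeddings and ResNet50 image features for binary propaganda detection.
# Model names, feature dimensions, and the fusion head are assumptions,
# not the authors' exact configuration.
import torch
import torch.nn as nn
from transformers import AutoModel
from torchvision.models import resnet50, ResNet50_Weights


class MemePropagandaClassifier(nn.Module):
    def __init__(self, text_model_name="UBC-NLP/MARBERT", hidden_dim=256):
        super().__init__()
        # Text branch: MARBERT encoder (768-dim [CLS] representation).
        self.text_encoder = AutoModel.from_pretrained(text_model_name)
        # Image branch: ResNet50 backbone with its classification head removed
        # (2048-dim pooled features).
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])
        # Fusion layer: concatenate text and image features, then classify.
        self.fusion = nn.Sequential(
            nn.Linear(768 + 2048, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden_dim, 2),  # propagandistic vs. not propagandistic
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        text_out = self.text_encoder(input_ids=input_ids,
                                     attention_mask=attention_mask)
        text_feat = text_out.last_hidden_state[:, 0]            # [CLS] token
        img_feat = self.image_encoder(pixel_values).flatten(1)  # (B, 2048)
        return self.fusion(torch.cat([text_feat, img_feat], dim=-1))

In this sketch the two encoders are kept intact and only their pooled representations are joined, which is a common baseline choice for multimodal meme classification; richer fusion schemes (cross-attention, gating) are equally plausible readings of the abstract.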