Deema Alnuhait
2025
AraTrust: An Evaluation of Trustworthiness for LLMs in Arabic
Emad A. Alghamdi | Reem Masoud | Deema Alnuhait | Afnan Y. Alomairi | Ahmed Ashraf | Mohamed Zaytoon
Proceedings of the 31st International Conference on Computational Linguistics
The swift progress and widespread adoption of artificial intelligence (AI) systems highlight a pressing need to understand both the capabilities and potential risks of AI. Given the linguistic complexity, cultural richness, and underrepresented status of Arabic in AI research, it is essential to focus on the performance and safety of Large Language Models (LLMs) in Arabic-related tasks. Despite some progress in their development, the lack of comprehensive trustworthiness evaluation benchmarks presents a major challenge in accurately assessing and improving the safety of LLMs when prompted in Arabic. In this paper, we introduce AraTrust, the first comprehensive trustworthiness benchmark for LLMs in Arabic. AraTrust comprises 522 human-written multiple-choice questions addressing diverse dimensions related to truthfulness, ethics, privacy, illegal activities, mental health, physical health, unfairness, and offensive language. We evaluated a set of LLMs against our benchmark to assess their trustworthiness. GPT-4 was the most trustworthy LLM, while open-source models, particularly AceGPT 7B and Jais 13B, struggled to achieve a score of 60% on our benchmark. The benchmark dataset is publicly available at https://huggingface.co/datasets/asas-ai/AraTrust
2024
CIDAR: Culturally Relevant Instruction Dataset For Arabic
Zaid Alyafeai | Khalid Almubarak | Ahmed Ashraf | Deema Alnuhait | Saied Alshahrani | Gubran Abdulrahman | Gamil Ahmed | Qais Gawah | Zead Saleh | Mustafa Ghaleb | Yousef Ali | Maged Al-shaibani
Findings of the Association for Computational Linguistics: ACL 2024
Instruction tuning has emerged as a prominent methodology for teaching Large Language Models (LLMs) to follow instructions. However, current instruction datasets predominantly cater to English or are derived from English-dominated LLMs, leading to inherent biases toward Western culture. This bias negatively impacts non-English languages such as Arabic and the unique culture of the Arab region. This paper addresses this limitation by introducing CIDAR, the first open Arabic instruction-tuning dataset culturally aligned by native Arabic speakers. CIDAR contains 10,000 instruction-output pairs representing the Arab region. We demonstrate the cultural relevance of CIDAR through analysis and comparison with models fine-tuned on other datasets. Our experiments indicate that models fine-tuned on CIDAR achieve better cultural alignment than those fine-tuned on 30x more data.