Hamdan Al-Ali
2026
JEEM: Vision-Language Understanding in Four Arabic Dialects
Karima Kadaoui | Hanin Atwany | Hamdan Al-Ali | Abdelrahman Mohamed | Ali Mekky | Sergei Tilga | Natalia Fedorova | Ekaterina Artemova | Hanan Aldarmaki | Yova Kementchedjhieva
Findings of the Association for Computational Linguistics: EACL 2026
We introduce JEEM, a benchmark designed to evaluate Vision-Language Models (VLMs) on visual understanding across four Arabic-speaking countries: Jordan, The Emirates, Egypt, and Morocco. JEEM includes the tasks of image captioning and visual question answering, and features culturally rich and regionally diverse content. This dataset aims to assess the ability of VLMs to generalize across dialects and accurately interpret cultural elements in visual contexts. In an evaluation of five prominent open-source Arabic VLMs and GPT-4o, we find that the Arabic VLMs consistently underperform, struggling with both visual understanding and dialect-specific generation. While GPT-4o ranks best in this comparison, the model’s linguistic competence varies across dialects, and its visual understanding capabilities lag behind. This underscores the need for more inclusive models and the value of culturally-diverse evaluation paradigms.
2025
SHADES: Towards a Multilingual Assessment of Stereotypes in Large Language Models
Margaret Mitchell | Giuseppe Attanasio | Ioana Baldini | Miruna Clinciu | Jordan Clive | Pieter Delobelle | Manan Dey | Sil Hamilton | Timm Dill | Jad Doughman | Ritam Dutt | Avijit Ghosh | Jessica Zosa Forde | Carolin Holtermann | Lucie-Aimée Kaffee | Tanmay Laud | Anne Lauscher | Roberto L Lopez-Davila | Maraim Masoud | Nikita Nangia | Anaelia Ovalle | Giada Pistilli | Dragomir Radev | Beatrice Savoldi | Vipul Raheja | Jeremy Qin | Esther Ploeger | Arjun Subramonian | Kaustubh Dhole | Kaiser Sun | Amirbek Djanibekov | Jonibek Mansurov | Kayo Yin | Emilio Villa Cueva | Sagnik Mukherjee | Jerry Huang | Xudong Shen | Jay Gala | Hamdan Al-Ali | Tair Djanibekov | Nurdaulet Mukhituly | Shangrui Nie | Shanya Sharma | Karolina Stanczak | Eliza Szczechla | Tiago Timponi Torrent | Deepak Tunuguntla | Marcelo Viridiano | Oskar Van Der Wal | Adina Yakefu | Aurélie Névéol | Mike Zhang | Sydney Zink | Zeerak Talat
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) reproduce and exacerbate the social biases present in their training data, and resources to quantify this issue are limited. While research has attempted to identify and mitigate such biases, most efforts have been concentrated around English, lagging the rapid advancement of LLMs in multilingual settings. In this paper, we introduce a new multilingual parallel dataset SHADES to help address this issue, designed for examining culturally-specific stereotypes that may be learned by LLMs. The dataset includes stereotypes from 20 regions around the world and 16 languages, spanning multiple identity categories subject to discrimination worldwide. We demonstrate its utility in a series of exploratory evaluations for both “base” and “instruction-tuned” language models. Our results suggest that stereotypes are consistently reflected across models and languages, with some languages and models indicating much stronger stereotype biases than others.
Co-authors
- Hanan Aldarmaki 1
- Ekaterina Artemova 1
- Giuseppe Attanasio 1
- Hanin Atwany 1
- Ioana Baldini 1
- Miruna Clinciu 1
- Jordan Clive 1
- Pieter Delobelle 1
- Manan Dey 1
- Kaustubh Dhole 1
- Timm Dill 1
- Amirbek Djanibekov 1
- Jad Doughman 1
- Ritam Dutt 1
- Natalia Fedorova 1
- Jessica Zosa Forde 1
- Jay Gala 1
- Avijit Ghosh 1
- Sil Hamilton 1
- Carolin Holtermann 1
- Jerry Huang 1
- Karima Kadaoui 1
- Lucie-Aimée Kaffee 1
- Yova Kementchedjhieva 1
- Tanmay Laud 1
- Anne Lauscher 1
- Roberto L Lopez-Davila 1
- Jonibek Mansurov 1
- Maraim Masoud 1
- Ali Mekky 1
- Margaret Mitchell 1
- Abdelrahman Mohamed 1
- Sagnik Mukherjee 1
- Nurdaulet Mukhituly 1
- Nikita Nangia 1
- Aurélie Névéol 1
- Shangrui Nie 1
- Anaelia Ovalle 1
- Giada Pistilli 1
- Esther Ploeger 1
- Jeremy Qin 1
- Dragomir Radev 1
- Vipul Raheja 1
- Beatrice Savoldi 1
- Shanya Sharma 1
- Xudong Shen 1
- Karolina Stanczak 1
- Arjun Subramonian 1
- Kaiser Sun 1
- Eliza Szczechla 1
- Tair Djanibekov 1
- Zeerak Talat 1
- Sergei Tilga 1
- Tiago Timponi Torrent 1
- Deepak Tunuguntla 1
- Oskar Van Der Wal 1
- Emilio Villa Cueva 1
- Marcelo Viridiano 1
- Adina Yakefu 1
- Kayo Yin 1
- Mike Zhang 1
- Sydney Zink 1