Shahriar Shariati Motlagh


2025

Persian in a Court: Benchmarking VLMs In Persian Multi-Modal Tasks
Farhan Farsi | Shahriar Shariati Motlagh | Shayan Bali | Sadra Sabouri | Saeedeh Momtazi
Proceedings of the First Workshop of Evaluation of Multi-Modal Generation

This study introduces a novel framework for evaluating Large Language Models (LLMs) and Vision-Language Models (VLMs) in Persian, a low-resource language. We develop comprehensive datasets to assess reasoning, linguistic understanding, and multimodal capabilities. Our datasets include Persian-OCR-QA for optical character recognition, Persian-VQA for visual question answering, Persian world-image puzzle for multimodal integration, Visual-Abstraction-Reasoning for abstract reasoning, and Iran-places for visual knowledge of Iranian figures and locations. We evaluate models like GPT-4o, Claude 3.5 Sonnet, and Llama 3.2 90B Vision, revealing their strengths and weaknesses in processing Persian. This research contributes to inclusive language processing by addressing the unique challenges of low-resource language evaluation.