Marzia Nouri
2026
MEENA (PersianMMMU): Multimodal-Multilingual Educational Exams for N-level Assessment
Omid Ghahroodi | Arshia Hemmat | Marzia Nouri | Seyed Mohammad Hadi Hosseini | Doratossadat Dastgheib | Mohammad Vali Sanian | Alireza Sahebi | Reihaneh Zohrabi | Mohammad Hossein Rohban | Ehsaneddin Asgari | Mahdieh Soleymani Baghshah
Findings of the Association for Computational Linguistics: EACL 2026
Recent advancements in large vision-language models (VLMs) have primarily focused on English, with limited attention given to other languages. To address this gap, we introduce MEENA (also known as PersianMMMU), the first dataset designed to evaluate Persian VLMs across scientific, reasoning, and human-level understanding tasks. Our dataset comprises approximately 7,500 Persian and 3,000 English questions, covering a wide range of topics such as reasoning, mathematics, physics, diagrams, charts, and Persian art and literature. Key features of MEENA include: (1) diverse subject coverage spanning various educational levels, from primary to upper secondary school, (2) rich metadata, including difficulty levels and descriptive answers, (3) original Persian data that preserves cultural nuances, (4) a bilingual structure to assess cross-linguistic performance, and (5) a series of diverse experiments assessing various capabilities, including overall performance, the model’s ability to attend to images, and its tendency to generate hallucinations. We hope this benchmark contributes to enhancing VLM capabilities beyond English.
2024
Latent Concept-based Explanation of NLP Models
Xuemin Yu | Fahim Dalvi | Nadir Durrani | Marzia Nouri | Hassan Sajjad
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Interpreting and understanding the predictions made by deep learning models poses a formidable challenge due to their inherently opaque nature. Many previous efforts aimed at explaining these predictions rely on input features, specifically, the words within NLP models. However, such explanations are often less informative due to the discrete nature of these words and their lack of contextual verbosity. To address this limitation, we introduce the Latent Concept Attribution method (LACOAT), which generates explanations for predictions based on latent concepts. Our foundational intuition is that a word can exhibit multiple facets, contingent upon the context in which it is used. Therefore, given a word in context, the latent space derived from our training process reflects a specific facet of that word. LACOAT functions by mapping the representations of salient input words into the training latent space, allowing it to provide latent context-based explanations of the prediction.
2023
The Language Model, Resources, and Computational Pipelines for the Under-Resourced Iranian Azerbaijani
Marzia Nouri | Mahsa Amani | Reihaneh Zohrabi | Ehsaneddin Asgari
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)