Mohammad Erfan Zare


2026

The Iranic language family includes many underrepresented languages and dialects that remain largely unexplored in modern NLP research. We introduce APARSIN, a multi-variety benchmark covering 14 Iranic languages, dialects, and accents, designed for sentiment analysis and machine translation. The dataset includes both high- and low-resource varieties, several of which are endangered, capturing the linguistic variation among them. We evaluate a set of instruction-tuned Large Language Models (LLMs) on these tasks and analyze their performance across the varieties. Our results highlight substantial performance gaps between standard Persian and the other Iranic languages and dialects, demonstrating the need for more inclusive multilingual and dialectally diverse NLP benchmarks.

2025

Mental health disorders such as stress, anxiety, and depression are increasingly prevalent globally, yet access to care remains limited due to barriers like geographic isolation, financial constraints, and stigma. Conversational agents, or chatbots, have emerged as viable digital tools for personalized mental health support. This paper presents the development of a psychological health chatbot designed specifically for Persian-speaking individuals, offering a culturally sensitive tool for emotion detection and disorder identification. The chatbot integrates several advanced natural language processing (NLP) modules, leveraging the ArmanEmo dataset to identify emotions, assess psychological states, and ensure safe, appropriate responses. Our evaluation of several models, including ParsBERT and XLM-RoBERTa, demonstrates effective emotion detection, with accuracy of up to 75.39%. Additionally, the system incorporates a Large Language Model (LLM) to generate its messages. This chatbot serves as a promising solution for addressing the accessibility gap in mental health care and provides a scalable, language-inclusive platform for psychological support.

This paper explores multilingual emotion classification across three tasks: binary classification, intensity estimation, and cross-lingual detection. To address linguistic variability and limited annotated data, we evaluate a range of approaches, including transformer-based embeddings and traditional classifiers. After extensive experimentation, language-specific embedding models were selected as the final approach, given their superior ability to capture linguistic and cultural nuances. Experiments on high- and low-resource languages demonstrate that this method significantly improves performance, achieving competitive macro-average F1 scores. Notably, on the cross-lingual detection task, our approach achieved a second-place ranking in languages such as Tigrinya and Kinyarwanda, driven by the incorporation of advanced preprocessing techniques. Despite these advances, challenges remain due to limited annotated data in underrepresented languages and the complexity of nuanced emotional expressions. The study highlights the need for robust, language-aware emotion recognition systems and emphasizes future directions, including expanding multilingual datasets and refining models.