Aylin Naebzadeh
2026
APARSIN: A Multi-Variety Sentiment and Translation Benchmark for Iranic Languages
Sadegh Jafari | Tara Azin | Farhad Roodi | Zahra Dehghani Tafti | Mehrdad Ghadrdan | Elham Vatankhahan Esfahani | Aylin Naebzadeh | Mohammadhadi Shahhosseini | Ghafoor Khan | Kazem Forghani | Danial Namazi | Seyed Mohammad Hossein Hashemi | Farhan Farsi | Mohammad Osoolian | Maede Mohammadi | Mohammad Erfan Zare | Muhammad Hasnain Khan | Muhammad Hussain | Nooreen Zaki | Joma Mohammadi | Shayan Bali | Mohammad Javad Ranjbar | Els Lefever | Veronique Hoste
The Proceedings of the First Workshop on NLP and LLMs for the Iranian Language Family
The Iranic language family includes many underrepresented languages and dialects that remain largely unexplored in modern NLP research. We introduce APARSIN, a multi-variety benchmark covering 14 Iranic languages, dialects, and accents, designed for sentiment analysis and machine translation. The dataset includes both high- and low-resource varieties, several of which are endangered, capturing linguistic variation across them. We evaluate a set of instruction-tuned Large Language Models (LLMs) on these tasks and analyze their performance across the varieties. Our results highlight substantial performance gaps between standard Persian and other Iranic languages and dialects, demonstrating the need for more inclusive multilingual and dialectally diverse NLP benchmarks.
2025
GinGer at SemEval-2025 Task 11: Leveraging Fine-Tuned Transformer Models and LoRA for Sentiment Analysis in Low-Resource Languages
Aylin Naebzadeh | Fatemeh Askari
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Emotion recognition is a crucial task in natural language processing, particularly in the domain of multi-label emotion classification, where a single text can express multiple emotions with varying intensities. In this work, we participated in Task 11, Track A and Track B of the SemEval-2025 competition, focusing on emotion detection in low-resource languages. Our approach leverages transformer-based models combined with parameter-efficient fine-tuning (PEFT) techniques to effectively address the challenges posed by data scarcity. We specifically applied our method to multiple languages and achieved 9th place in the Arabic Algerian track among 40 competing teams. Our results demonstrate the effectiveness of PEFT in improving emotion recognition performance for low-resource languages. The code for our implementation is publicly available at: https://github.com/AylinNaebzadeh/Text-Based-Emotion-Detection-SemEval-2025.