MGKM at StanceEval2024 Fine-Tuning Large Language Models for Arabic Stance Detection

Mamoun Alghaslan, Khaled Almutairy


Abstract
Social media platforms have become essential in daily life, enabling users to express their opinions and stances on various topics. Stance detection, which identifies the viewpoint expressed in text toward a target, has predominantly focused on English. MAWQIF is the pioneering Arabic dataset for target-specific stance detection, consisting of 4,121 tweets annotated with stance, sentiment, and sarcasm. The original benchmark on four BERT-based models achieved a best macro-F1 score of 78.89, leaving significant room for improvement. This study evaluates the effectiveness of three Large Language Models (LLMs) in detecting target-specific stances in MAWQIF: ChatGPT-3.5-turbo, Meta-Llama-3-8B-Instruct, and Falcon-7B-Instruct. Each model was evaluated in both zero-shot and full fine-tuning settings, with performance measured by macro-F1. Our findings demonstrate that fine-tuning substantially enhances the stance detection capabilities of LLMs on Arabic tweets. Notably, GPT-3.5-Turbo achieved the highest performance with a macro-F1 score of 82.93, underscoring the potential of fine-tuned LLMs for language-specific applications.
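
The abstract describes zero-shot prompting and fine-tuning of instruction-tuned LLMs for three-way stance classification scored with macro-F1. The sketch below is a minimal, hypothetical illustration of the zero-shot setting with GPT-3.5-Turbo plus macro-F1 scoring; the prompt wording, label set (FAVOR/AGAINST/NONE), and helper names are assumptions for illustration, not the authors' exact configuration.

    # Minimal sketch of zero-shot Arabic stance detection with an instruction-tuned LLM.
    # Prompt text, label set, and function names are illustrative assumptions.
    from openai import OpenAI
    from sklearn.metrics import f1_score

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    LABELS = ["FAVOR", "AGAINST", "NONE"]  # assumed Mawqif-style stance labels

    def predict_stance(tweet: str, target: str) -> str:
        """Ask the model for the stance of an Arabic tweet toward a given target."""
        prompt = (
            f"Tweet (Arabic): {tweet}\n"
            f"Target: {target}\n"
            "What is the stance of the tweet toward the target? "
            "Answer with exactly one of: FAVOR, AGAINST, NONE."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        answer = response.choices[0].message.content.strip().upper()
        # Fall back to NONE if the reply is not one of the expected labels.
        return answer if answer in LABELS else "NONE"

    def macro_f1(gold: list[str], predicted: list[str]) -> float:
        """Macro-averaged F1 over the three stance labels, as reported in the paper."""
        return f1_score(gold, predicted, labels=LABELS, average="macro")

In the fine-tuned setting, the zero-shot call would be replaced by a model trained on the Mawqif training split; the same macro-F1 computation applies.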
Anthology ID:
2024.arabicnlp-1.95
Volume:
Proceedings of The Second Arabic Natural Language Processing Conference
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Nizar Habash, Houda Bouamor, Ramy Eskander, Nadi Tomeh, Ibrahim Abu Farha, Ahmed Abdelali, Samia Touileb, Injy Hamed, Yaser Onaizan, Bashar Alhafni, Wissam Antoun, Salam Khalifa, Hatem Haddad, Imed Zitouni, Badr AlKhamissi, Rawan Almatham, Khalil Mrini
Venues:
ArabicNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
816–822
URL:
https://aclanthology.org/2024.arabicnlp-1.95
DOI:
10.18653/v1/2024.arabicnlp-1.95
Cite (ACL):
Mamoun Alghaslan and Khaled Almutairy. 2024. MGKM at StanceEval2024 Fine-Tuning Large Language Models for Arabic Stance Detection. In Proceedings of The Second Arabic Natural Language Processing Conference, pages 816–822, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
MGKM at StanceEval2024 Fine-Tuning Large Language Models for Arabic Stance Detection (Alghaslan & Almutairy, ArabicNLP-WS 2024)
PDF:
https://aclanthology.org/2024.arabicnlp-1.95.pdf