Frank at ArAIEval Shared Task: Arabic Persuasion and Disinformation: The Power of Pretrained Models

Dilshod Azizov, Jiyong Li, Shangsong Liang


Abstract
In this work, we present our systems developed for the ArAIEval shared task of ArabicNLP 2023 (CITATION). We used an mBERT transformer for Subtask 1A, which targets the detection of persuasion in Arabic tweets, and the MARBERT transformer for Subtask 2A, which targets the identification of disinformation in Arabic tweets. Our persuasion detection system achieved a micro-F1 of 0.745, surpassing the baseline by 13.2%, and registered a macro-F1 of 0.717 on the leaderboard. Similarly, our disinformation detection system recorded a micro-F1 of 0.816, outperforming the naïve majority baseline by 6.7%, with a macro-F1 of 0.637. Furthermore, we present preliminary results for a variety of pretrained models. In the overall ranking, our systems placed 7th out of 16 teams for Subtask 1A and 12th out of 17 teams for Subtask 2A.
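As a rough illustration of the approach the abstract describes, the sketch below fine-tunes MARBERT for binary tweet classification with Hugging Face Transformers. It is a minimal sketch, not the authors' pipeline: the hyperparameters and toy data are assumptions, and real runs would load the ArAIEval task splits instead.

```python
# Minimal sketch: fine-tuning MARBERT for binary disinformation
# detection on Arabic tweets (Subtask 2A). Hyperparameters and the
# toy dataset are illustrative, not the authors' exact configuration.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

MODEL_ID = "UBC-NLP/MARBERT"  # for Subtask 1A the abstract uses mBERT,
                              # e.g. "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=2)  # binary: disinformation vs. not

# Toy stand-in for the shared-task tweets; replace with the real
# ArAIEval training split in practice.
train = Dataset.from_dict({
    "text": ["مثال تغريدة أولى", "مثال تغريدة ثانية"],
    "label": [0, 1],
}).map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
       batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()
```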
Anthology ID:
2023.arabicnlp-1.59
Volume:
Proceedings of ArabicNLP 2023
Month:
December
Year:
2023
Address:
Singapore (Hybrid)
Editors:
Hassan Sawaf, Samhaa El-Beltagy, Wajdi Zaghouani, Walid Magdy, Ahmed Abdelali, Nadi Tomeh, Ibrahim Abu Farha, Nizar Habash, Salam Khalifa, Amr Keleg, Hatem Haddad, Imed Zitouni, Khalil Mrini, Rawan Almatham
Venues:
ArabicNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
583–588
URL:
https://aclanthology.org/2023.arabicnlp-1.59
DOI:
10.18653/v1/2023.arabicnlp-1.59
Cite (ACL):
Dilshod Azizov, Jiyong Li, and Shangsong Liang. 2023. Frank at ArAIEval Shared Task: Arabic Persuasion and Disinformation: The Power of Pretrained Models. In Proceedings of ArabicNLP 2023, pages 583–588, Singapore (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Frank at ArAIEval Shared Task: Arabic Persuasion and Disinformation: The Power of Pretrained Models (Azizov et al., ArabicNLP-WS 2023)
PDF:
https://aclanthology.org/2023.arabicnlp-1.59.pdf