Ravindra Murumkar
2023
Mavericks at ArAIEval Shared Task: Towards a Safer Digital Space - Transformer Ensemble Models Tackling Deception and Persuasion
Sudeep Mangalvedhekar | Kshitij Deshpande | Yash Patwardhan | Vedant Deshpande | Ravindra Murumkar
Proceedings of ArabicNLP 2023
In this paper, we present our approach for the “Arabic AI Tasks Evaluation (ArAIEval) Shared Task 2023”. We describe our systems for task 1-A and task 2-A, which focus on persuasion technique detection and disinformation detection respectively. Detecting persuasion techniques and disinformation has become imperative to avoid the distortion of authentic information. Both tasks are binary classification problems over multigenre snippets of tweets and news articles. We experiment with several transformer-based models pre-trained on the Arabic language and fine-tune these state-of-the-art models on the provided datasets. Ensembling is employed to enhance the performance of the systems. We achieved a micro F1-score of 0.742 on task 1-A (8th rank on the leaderboard) and 0.901 on task 2-A (7th rank on the leaderboard).
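For context, a minimal sketch of the fine-tuning setup the abstract describes is shown below, using the Hugging Face Transformers library. The checkpoint name, file paths, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: fine-tuning an Arabic transformer for binary classification.
# The checkpoint, data files, and hyperparameters are assumed placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "aubmindlab/bert-base-arabertv2"  # assumed Arabic checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Hypothetical JSON files with "text" and "label" fields.
dataset = load_dataset("json", data_files={"train": "train.json", "validation": "dev.json"})

def tokenize(batch):
    # Truncate/pad each snippet to a fixed length before training.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="checkpoints",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```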
Mavericks at NADI 2023 Shared Task: Unravelling Regional Nuances through Dialect Identification using Transformer-based Approach
Vedant Deshpande | Yash Patwardhan | Kshitij Deshpande | Sudeep Mangalvedhekar | Ravindra Murumkar
Proceedings of ArabicNLP 2023
In this paper, we present our approach for the “Nuanced Arabic Dialect Identification (NADI) Shared Task 2023”. We describe our methodology for subtask 1, which deals with country-level dialect identification. Recognizing dialects plays an instrumental role in enhancing the performance of various downstream NLP tasks such as speech recognition and translation. The task uses the Twitter dataset (TWT-2023), which encompasses 18 dialects, for the multi-class classification problem. Several transformer-based models pre-trained on the Arabic language are employed to identify country-level dialects, and we fine-tune these state-of-the-art models on the provided dataset. An ensembling method is leveraged to improve the performance of the system. We achieved an F1-score of 76.65 (11th rank on the leaderboard) on the test dataset.
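The ensembling mentioned in both abstracts can be realized as a simple hard (majority) vote over each fine-tuned model's predicted labels. A minimal sketch follows; the per-model predictions and dialect labels below are hypothetical placeholders, not the authors' actual outputs.

```python
# Sketch: hard-voting ensemble over predictions from several models.
from collections import Counter

def majority_vote(per_model_preds):
    """per_model_preds: list of equal-length label lists, one per model.
    Returns, for each example, the label predicted by the most models
    (ties resolved by first occurrence)."""
    ensembled = []
    for labels in zip(*per_model_preds):
        ensembled.append(Counter(labels).most_common(1)[0][0])
    return ensembled

# Example with three hypothetical models over four test examples.
preds_a = ["EGY", "SDN", "MAR", "EGY"]
preds_b = ["EGY", "LBN", "MAR", "SYR"]
preds_c = ["SDN", "LBN", "MAR", "EGY"]
print(majority_vote([preds_a, preds_b, preds_c]))  # ['EGY', 'LBN', 'MAR', 'EGY']
```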