Aymen Ben Elhaj Mabrouk


2021

Introducing A large Tunisian Arabizi Dialectal Dataset for Sentiment Analysis
Chayma Fourati | Hatem Haddad | Abir Messaoudi | Moez BenHajhmida | Aymen Ben Elhaj Mabrouk | Malek Naski
Proceedings of the Sixth Arabic Natural Language Processing Workshop

On various social media platforms, people tend to communicate and write posts and comments informally, in their local dialects. In Africa, more than 1,500 dialects and languages exist. In particular, Tunisians talk and write informally using Latin letters and numbers rather than Arabic script. In this paper, we introduce a large common-crawl-based Tunisian Arabizi dialectal dataset dedicated to Sentiment Analysis. The dataset consists of a total of 100k comments (about movies, politics, sport, etc.) annotated manually by Tunisian native speakers as Positive, Negative, or Neutral. We evaluate our dataset on the sentiment analysis task using the Bidirectional Encoder Representations from Transformers (BERT) contextual language model in its multilingual version (mBERT) as an embedding technique, combining mBERT with a Convolutional Neural Network (CNN) as the classifier. The dataset is publicly available.
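
The mBERT-plus-CNN architecture the abstract describes can be pictured roughly as follows: mBERT produces contextual token embeddings, convolutions of several widths slide over them, and max-pooled feature maps feed a three-way classifier. This is a minimal sketch in PyTorch with HuggingFace transformers, assuming the public bert-base-multilingual-cased checkpoint; the filter count, kernel sizes, and the example Arabizi comment are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of an mBERT-embedding + CNN-classifier pipeline (illustrative only).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class MBertCnnClassifier(nn.Module):
    def __init__(self, num_labels=3, num_filters=128, kernel_sizes=(2, 3, 4)):
        super().__init__()
        # mBERT yields contextual token embeddings (hidden size 768).
        self.bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.bert.config.hidden_size
        # One 1-D convolution per kernel width over the token dimension.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d.
        embeddings = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = embeddings.transpose(1, 2)
        # Max-pool each feature map over time, then concatenate.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = MBertCnnClassifier()
# Hypothetical Arabizi comment, just to exercise the pipeline.
batch = tokenizer(["3jebni barcha el film hedha"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # (1, 3): Pos/Neg/Neu
```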

iCompass at Shared Task on Sarcasm and Sentiment Detection in Arabic
Malek Naski | Abir Messaoudi | Hatem Haddad | Moez BenHajhmida | Chayma Fourati | Aymen Ben Elhaj Mabrouk
Proceedings of the Sixth Arabic Natural Language Processing Workshop

We describe our system submitted to the 2021 Shared Task on Sarcasm and Sentiment Detection in Arabic (Abu Farha et al., 2021). We tackled both subtasks, namely Sarcasm Detection (Subtask 1) and Sentiment Analysis (Subtask 2). We used state-of-the-art pretrained contextualized text representation models and fine-tuned them for the downstream task at hand. As a first approach, we used Google's multilingual BERT, and then Arabic-specific models: AraBERT, ARBERT, and MARBERT. The results show that MARBERT outperforms all of the previously mentioned models overall, on both Subtask 1 and Subtask 2.
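
The fine-tuning recipe the abstract outlines, taking a pretrained checkpoint and training a classification head on the shared-task labels, looks roughly like this with the HuggingFace Trainer. The "UBC-NLP/MARBERT" checkpoint name and all hyperparameters here are assumptions for illustration, not the submission's actual settings, and the two-row dataset is a placeholder for the real shared-task data.

```python
# Minimal fine-tuning sketch for one of the compared models (MARBERT shown).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "UBC-NLP/MARBERT"  # assumed Hub name for MARBERT
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Subtask 1 (sarcasm) is binary; Subtask 2 (sentiment) would use num_labels=3.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Placeholder rows standing in for the shared-task training tweets.
train_dataset = Dataset.from_dict({
    "text": ["placeholder tweet 1", "placeholder tweet 2"],
    "label": [0, 1],  # 0 = not sarcastic, 1 = sarcastic
})

def tokenize(batch):
    # Pad/truncate to a fixed length so no data collator is needed.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

args = TrainingArguments(
    output_dir="marbert-sarcasm",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_dataset.map(tokenize, batched=True))
trainer.train()
```

Swapping in AraBERT or ARBERT, as the team did, would only mean changing the checkpoint name; the rest of the fine-tuning loop stays the same.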