Transformer-driven Multi-task Learning for Fake and Hateful Content Detection

Asha Hegde, H L Shashirekha


Abstract
Social media has revolutionized communication these days, in addition to facilitating the spread of fake and hate content. While fake content is the manipulation of facts through disinformation, hate content is textual violence or discrimination targeting a group or an individual. Fake narratives have the potential to spread hate content, making people aggressive or hurting the sentiments of an individual or a group. Further, false narratives often dominate discussions on sensitive topics, amplifying harmful messages and contributing to the rise of hate speech. Hence, understanding how fake narratives drive hate speech is crucial in this digital age, making it necessary to develop automatic tools to identify fake and hate content. In this direction, Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate), a shared task organized at the International Conference on Natural Language Processing (ICON) 2024, invites researchers to tackle both fake and hate detection in social media comments, with additional emphasis on identifying the target and severity of hateful speech. The shared task consists of two subtasks: Task A (identifying fake and hate content) and Task B (identifying the target and severity of hateful speech). In this paper, we, team MUCS, describe the models proposed to address the challenges of this shared task. We propose two models: i) Hing_MTL, a Multi-Task Learning (MTL) model implemented using pre-trained Hinglish Bidirectional Encoder Representations from Transformers (HinglishBERT), and ii) Ensemble_MTL, an MTL model implemented by ensembling two pre-trained models (HinglishBERT and the Multilingual Distilled version of BERT (MDistilBERT)), to detect fake and hate content and to identify the target and severity of hateful speech. The Ensemble_MTL model outperformed the Hing_MTL model with macro F1 scores of 0.7589 and 0.5746 for Task A and Task B respectively, securing 6th place in both subtasks.
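The abstract describes a multi-task setup in which a shared transformer encoder feeds separate classification heads for the subtask labels. The following is a minimal sketch of that idea, not the authors' implementation: the checkpoint name "distilbert-base-multilingual-cased" stands in for MDistilBERT, the HinglishBERT identifier would need to be substituted with the actual checkpoint, and the two binary heads shown correspond only to Task A (fake and hate detection).

```python
# Minimal multi-task learning sketch (assumed, not the authors' code):
# a shared transformer encoder with one classification head per subtask label.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class SharedEncoderMTL(nn.Module):
    """Shared encoder with separate heads for the fake and hate labels (Task A)."""

    def __init__(self, model_name: str = "distilbert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.fake_head = nn.Linear(hidden, 2)  # fake vs. real
        self.hate_head = nn.Linear(hidden, 2)  # hate vs. non-hate

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first-token representation
        return self.fake_head(cls), self.hate_head(cls)


# Usage: tokenize a comment and obtain logits for both Task A labels.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = SharedEncoderMTL()
batch = tokenizer(["example social media comment"], return_tensors="pt",
                  truncation=True, padding=True)
fake_logits, hate_logits = model(batch["input_ids"], batch["attention_mask"])
# Joint training would typically sum the cross-entropy losses of the two heads.
```

Task B heads (target and severity) and the ensemble of two encoders would follow the same pattern, with additional heads or a combination of the per-encoder predictions.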
Anthology ID:
2024.icon-fauxhate.6
Volume:
Proceedings of the 21st International Conference on Natural Language Processing (ICON): Shared Task on Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate)
Month:
December
Year:
2024
Address:
AU-KBC Research Centre, Chennai, India
Editors:
Shankar Biradar, Kasu Sai Kartheek Reddy, Sunil Saumya, Md. Shad Akhtar
Venue:
ICON
SIG:
SIGLEX
Publisher:
NLP Association of India (NLPAI)
Pages:
29–35
URL:
https://aclanthology.org/2024.icon-fauxhate.6/
Cite (ACL):
Asha Hegde and H L Shashirekha. 2024. Transformer-driven Multi-task Learning for Fake and Hateful Content Detection. In Proceedings of the 21st International Conference on Natural Language Processing (ICON): Shared Task on Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate), pages 29–35, AU-KBC Research Centre, Chennai, India. NLP Association of India (NLPAI).
Cite (Informal):
Transformer-driven Multi-task Learning for Fake and Hateful Content Detection (Hegde & Shashirekha, ICON 2024)
PDF:
https://aclanthology.org/2024.icon-fauxhate.6.pdf