@inproceedings{hegde-shashirekha-2024-transformer,
title = "Transformer-driven Multi-task Learning for Fake and Hateful Content Detection",
author = "Hegde, Asha and
Shashirekha, H L",
editor = "Biradar, Shankar and
Reddy, Kasu Sai Kartheek and
Saumya, Sunil and
Akhtar, Md. Shad",
booktitle = "Proceedings of the 21st International Conference on Natural Language Processing (ICON): Shared Task on Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate)",
month = dec,
year = "2024",
address = "AU-KBC Research Centre, Chennai, India",
publisher = "NLP Association of India (NLPAI)",
url = "https://aclanthology.org/2024.icon-fauxhate.6/",
pages = "29--35",
abstract = "Social media has revolutionized communication these days in addition to facilitating the spread of fake and hate content. While fake content is the manipulation of facts by disinformation, hate content is textual violence or discrimination targeting a group or an individual. Fake narratives have the potential to spread hate content, making people aggressive or hurting the sentiments of an individual or a group. Further, false narratives often dominate discussions on sensitive topics, amplifying harmful messages and contributing to the rise of hate speech. Hence, understanding the relationship between hate speech and the fake narratives driving it is crucial in this digital age, making it necessary to develop automatic tools to identify fake and hate content. In this direction, Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate) - a shared task organized at the International Conference on Natural Language Processing (ICON) 2024 - invites researchers to tackle both fake and hate detection in social media comments, with additional emphasis on identifying the target and severity of hateful speech. The shared task consists of two subtasks - Task A (Identifying fake and hate content) and Task B (Identifying the target and severity of hateful speech). In this paper, we, team MUCS, describe the models proposed to address the challenges of this shared task. We propose two models: i) Hing{\_}MTL - a Multi-task Learning (MTL) model implemented using pre-trained Hinglish Bidirectional Encoder Representations from Transformers (HinglishBERT), and ii) Ensemble{\_}MTL - an MTL model implemented by ensembling two pre-trained models (HinglishBERT and Multilingual Distilled version of BERT (MDistilBERT)), to detect fake and hate content and identify the target and severity of hateful speech. The Ensemble{\_}MTL model outperformed the Hing{\_}MTL model with macro F1 scores of 0.7589 and 0.5746 for Task A and Task B respectively, securing 6th place in both subtasks."
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="hegde-shashirekha-2024-transformer">
<titleInfo>
<title>Transformer-driven Multi-task Learning for Fake and Hateful Content Detection</title>
</titleInfo>
<name type="personal">
<namePart type="given">Asha</namePart>
<namePart type="family">Hegde</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">H</namePart>
<namePart type="given">L</namePart>
<namePart type="family">Shashirekha</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2024-12</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<relatedItem type="host">
<titleInfo>
<title>Proceedings of the 21st International Conference on Natural Language Processing (ICON): Shared Task on Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate)</title>
</titleInfo>
<name type="personal">
<namePart type="given">Shankar</namePart>
<namePart type="family">Biradar</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Kasu</namePart>
<namePart type="given">Sai</namePart>
<namePart type="given">Kartheek</namePart>
<namePart type="family">Reddy</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Sunil</namePart>
<namePart type="family">Saumya</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Md.</namePart>
<namePart type="given">Shad</namePart>
<namePart type="family">Akhtar</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<originInfo>
<publisher>NLP Association of India (NLPAI)</publisher>
<place>
<placeTerm type="text">AU-KBC Research Centre, Chennai, India</placeTerm>
</place>
</originInfo>
<genre authority="marcgt">conference publication</genre>
</relatedItem>
<abstract>Social media has revolutionized communication these days in addition to facilitating the spread of fake and hate content. While fake content is the manipulation of facts by disinformation, hate content is textual violence or discrimination targeting a group or an individual. Fake narratives have the potential to spread hate content, making people aggressive or hurting the sentiments of an individual or a group. Further, false narratives often dominate discussions on sensitive topics, amplifying harmful messages and contributing to the rise of hate speech. Hence, understanding the relationship between hate speech and the fake narratives driving it is crucial in this digital age, making it necessary to develop automatic tools to identify fake and hate content. In this direction, Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate) - a shared task organized at the International Conference on Natural Language Processing (ICON) 2024 - invites researchers to tackle both fake and hate detection in social media comments, with additional emphasis on identifying the target and severity of hateful speech. The shared task consists of two subtasks - Task A (Identifying fake and hate content) and Task B (Identifying the target and severity of hateful speech). In this paper, we, team MUCS, describe the models proposed to address the challenges of this shared task. We propose two models: i) Hing_MTL - a Multi-task Learning (MTL) model implemented using pre-trained Hinglish Bidirectional Encoder Representations from Transformers (HinglishBERT), and ii) Ensemble_MTL - an MTL model implemented by ensembling two pre-trained models (HinglishBERT and Multilingual Distilled version of BERT (MDistilBERT)), to detect fake and hate content and identify the target and severity of hateful speech. The Ensemble_MTL model outperformed the Hing_MTL model with macro F1 scores of 0.7589 and 0.5746 for Task A and Task B respectively, securing 6th place in both subtasks.</abstract>
<identifier type="citekey">hegde-shashirekha-2024-transformer</identifier>
<location>
<url>https://aclanthology.org/2024.icon-fauxhate.6/</url>
</location>
<part>
<date>2024-12</date>
<extent unit="page">
<start>29</start>
<end>35</end>
</extent>
</part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T Transformer-driven Multi-task Learning for Fake and Hateful Content Detection
%A Hegde, Asha
%A Shashirekha, H. L.
%Y Biradar, Shankar
%Y Reddy, Kasu Sai Kartheek
%Y Saumya, Sunil
%Y Akhtar, Md. Shad
%S Proceedings of the 21st International Conference on Natural Language Processing (ICON): Shared Task on Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate)
%D 2024
%8 December
%I NLP Association of India (NLPAI)
%C AU-KBC Research Centre, Chennai, India
%F hegde-shashirekha-2024-transformer
%X Social media has revolutionized communication these days in addition to facilitating the spread of fake and hate content. While fake content is the manipulation of facts by disinformation, hate content is textual violence or discrimination targeting a group or an individual. Fake narratives have the potential to spread hate content, making people aggressive or hurting the sentiments of an individual or a group. Further, false narratives often dominate discussions on sensitive topics, amplifying harmful messages and contributing to the rise of hate speech. Hence, understanding the relationship between hate speech and the fake narratives driving it is crucial in this digital age, making it necessary to develop automatic tools to identify fake and hate content. In this direction, Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate) - a shared task organized at the International Conference on Natural Language Processing (ICON) 2024 - invites researchers to tackle both fake and hate detection in social media comments, with additional emphasis on identifying the target and severity of hateful speech. The shared task consists of two subtasks - Task A (Identifying fake and hate content) and Task B (Identifying the target and severity of hateful speech). In this paper, we, team MUCS, describe the models proposed to address the challenges of this shared task. We propose two models: i) Hing_MTL - a Multi-task Learning (MTL) model implemented using pre-trained Hinglish Bidirectional Encoder Representations from Transformers (HinglishBERT), and ii) Ensemble_MTL - an MTL model implemented by ensembling two pre-trained models (HinglishBERT and Multilingual Distilled version of BERT (MDistilBERT)), to detect fake and hate content and identify the target and severity of hateful speech. The Ensemble_MTL model outperformed the Hing_MTL model with macro F1 scores of 0.7589 and 0.5746 for Task A and Task B respectively, securing 6th place in both subtasks.
%U https://aclanthology.org/2024.icon-fauxhate.6/
%P 29-35
Markdown (Informal)
[Transformer-driven Multi-task Learning for Fake and Hateful Content Detection](https://aclanthology.org/2024.icon-fauxhate.6/) (Hegde & Shashirekha, ICON 2024)