LoRA adapter weight tuning with multi-task learning for Faux-Hate detection

Abhinandan Onajol, Varun Gani, Praneeta Marakatti, Bhakti Malwankar, Shankar Biradar


Abstract
Detecting misinformation and harmful language in bilingual texts, particularly those combining Hindi and English, poses considerable difficulties. The intricacies of mixed-language content and limited available resources complicate this task even further. The proposed work focuses on unraveling deceptive stories that propagate hate. We have developed an attention-weight-tuned LoRA adapter-based model for such Faux-Hate content detection. This work was conducted as part of the ICON 2024 shared task on Decoding Fake Narratives in Spreading Hateful Stories. The LoRA-enhanced architecture secured 13th place among the participating teams for Task A.
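The abstract describes combining LoRA adapters with multi-task learning: a shared encoder is adapted via low-rank weight updates while task-specific heads are trained jointly. A minimal sketch of this idea is shown below in pure NumPy; the rank, scaling factor, head dimensions, and second task are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 16, 4, 8  # hidden size, LoRA rank, scaling (assumed values)

# Frozen pretrained weight W; low-rank factors A and B are the trainable
# adapter parameters. B starts at zero so the adapter is a no-op initially.
W = rng.normal(size=(d, d))
A = rng.normal(scale=0.01, size=(r, d))
B = np.zeros((d, r))

def lora_forward(x):
    # Effective weight: W + (alpha / r) * B @ A. Only A, B (and the task
    # heads below) would receive gradients during fine-tuning.
    return x @ (W + (alpha / r) * (B @ A)).T

# Two task-specific heads sharing the LoRA-adapted encoder (multi-task
# setup). Head output sizes here are hypothetical.
head_a = rng.normal(scale=0.01, size=(d, 2))  # e.g. Faux-Hate binary label
head_b = rng.normal(scale=0.01, size=(d, 3))  # e.g. a second auxiliary task

x = rng.normal(size=(5, d))  # batch of 5 pooled sentence embeddings
h = lora_forward(x)
logits_a, logits_b = h @ head_a, h @ head_b
print(logits_a.shape, logits_b.shape)
```

Because B is initialized to zero, `lora_forward` initially reproduces the frozen encoder exactly; training moves only the small A/B matrices and the heads, which is what makes LoRA parameter-efficient for low-resource, code-mixed settings like this one.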
Anthology ID:
2024.icon-fauxhate.11
Volume:
Proceedings of the 21st International Conference on Natural Language Processing (ICON): Shared Task on Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate)
Month:
December
Year:
2024
Address:
AU-KBC Research Centre, Chennai, India
Editors:
Shankar Biradar, Kasu Sai Kartheek Reddy, Sunil Saumya, Md. Shad Akhtar
Venue:
ICON
SIG:
SIGLEX
Publisher:
NLP Association of India (NLPAI)
Note:
Pages:
56–60
Language:
URL:
https://aclanthology.org/2024.icon-fauxhate.11/
DOI:
Bibkey:
Cite (ACL):
Abhinandan Onajol, Varun Gani, Praneeta Marakatti, Bhakti Malwankar, and Shankar Biradar. 2024. LoRA adapter weight tuning with multi-task learning for Faux-Hate detection. In Proceedings of the 21st International Conference on Natural Language Processing (ICON): Shared Task on Decoding Fake Narratives in Spreading Hateful Stories (Faux-Hate), pages 56–60, AU-KBC Research Centre, Chennai, India. NLP Association of India (NLPAI).
Cite (Informal):
LoRA adapter weight tuning with multi-task learning for Faux-Hate detection (Onajol et al., ICON 2024)
PDF:
https://aclanthology.org/2024.icon-fauxhate.11.pdf