A Unified Multi task Learning Architecture for Hate Detection Leveraging User-based Information

Prashant Kapil, Asif Ekbal


Abstract
Hate speech, offensive language, aggression, racism, sexism, and other forms of abusive language are a common phenomenon in social media. There is a need for Artificial Intelligence (AI)-based intervention that can filter hate content at scale. Most existing hate speech detection solutions treat each post as an isolated input instance for classification. This paper addresses this limitation by introducing a unique model that improves hate speech identification for the English language by utilizing intra-user and inter-user-based information. The experiments are conducted over single-task learning (STL) and multi-task learning (MTL) paradigms that use deep neural networks, such as the convolutional neural network (CNN), gated recurrent unit (GRU), Bidirectional Encoder Representations from Transformers (BERT), and A Lite BERT (ALBERT). We use three benchmark datasets and conclude that combining certain user features with textual features yields significant improvements in macro-F1 and weighted-F1.
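To make the fusion of user-based and textual features concrete, here is a minimal sketch (not the authors' released implementation) of hard-parameter-sharing multi-task learning: a shared BERT encoder whose [CLS] representation is concatenated with a user-feature vector before task-specific classification heads. The user-feature dimension, the per-task label counts, and the class name are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class MultiTaskHateClassifier(nn.Module):
    """Shared BERT encoder + user features, one classification head per task (sketch)."""

    def __init__(self, user_feat_dim=32, task_num_labels=(2, 3, 3)):
        # user_feat_dim and task_num_labels are assumed values for illustration:
        # user_feats would encode intra-/inter-user signals for the post's author.
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.encoder.config.hidden_size  # 768 for bert-base
        self.dropout = nn.Dropout(0.1)
        # One linear head per dataset/task over the fused [text ; user] vector.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden + user_feat_dim, n) for n in task_num_labels]
        )

    def forward(self, input_ids, attention_mask, user_feats, task_id):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]             # [CLS] token representation
        fused = torch.cat([cls, user_feats], dim=-1)  # text + user-based information
        return self.heads[task_id](self.dropout(fused))
```

Training would then alternate mini-batches across the benchmark datasets, applying a cross-entropy loss on the head matching each batch's task_id, so the shared encoder is updated by every task while each head remains task-specific.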
Anthology ID:
2023.icon-1.53
Volume:
Proceedings of the 20th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2023
Address:
Goa University, Goa, India
Editors:
Jyoti D. Pawar, Sobha Lalitha Devi
Venue:
ICON
SIG:
SIGLEX
Publisher:
NLP Association of India (NLPAI)
Pages:
567–573
URL:
https://aclanthology.org/2023.icon-1.53
Cite (ACL):
Prashant Kapil and Asif Ekbal. 2023. A Unified Multi task Learning Architecture for Hate Detection Leveraging User-based Information. In Proceedings of the 20th International Conference on Natural Language Processing (ICON), pages 567–573, Goa University, Goa, India. NLP Association of India (NLPAI).
Cite (Informal):
A Unified Multi task Learning Architecture for Hate Detection Leveraging User-based Information (Kapil & Ekbal, ICON 2023)
PDF:
https://aclanthology.org/2023.icon-1.53.pdf