Comparative Studies of Detecting Abusive Language on Twitter

Younghun Lee, Seunghyun Yoon, Kyomin Jung


Abstract
The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not yet been studied to its full potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data to improve performance. Experimental results show that a bidirectional GRU network trained on word-level features and augmented with Latent Topic Clustering modules is the most accurate model, scoring 0.805 F1.
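
The best-performing configuration named in the abstract (a word-level bidirectional GRU with a Latent Topic Clustering module) can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' released implementation (see the Code link below); the embedding size, hidden size, number of latent topics, and the four-way output are assumptions made for illustration.

# Minimal sketch of a word-level bidirectional GRU classifier with a
# Latent Topic Clustering (LTC) step. Hyperparameters and the class
# count are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiGRUWithLTC(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128,
                 num_topics=3, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        # Latent topic memory: each row is a learnable "topic" vector.
        self.topic_memory = nn.Parameter(torch.randn(num_topics, 2 * hidden_dim))
        # Classifier sees the GRU summary concatenated with its topic mixture.
        self.classifier = nn.Linear(4 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)             # (B, T, E)
        _, final_hidden = self.gru(embedded)             # (2, B, H)
        summary = torch.cat([final_hidden[0], final_hidden[1]], dim=-1)  # (B, 2H)
        # LTC: soft-assign the summary vector to latent topics and build
        # a topic-weighted representation from the topic memory.
        scores = summary @ self.topic_memory.t()         # (B, num_topics)
        weights = F.softmax(scores, dim=-1)
        topic_repr = weights @ self.topic_memory         # (B, 2H)
        features = torch.cat([summary, topic_repr], dim=-1)  # (B, 4H)
        return self.classifier(features)

# Usage: classify a batch of padded token-id sequences.
model = BiGRUWithLTC(vocab_size=50000)
logits = model(torch.randint(1, 50000, (8, 40)))         # shape (8, 4)
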
Anthology ID: W18-5113
Volume: Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)
Month: October
Year: 2018
Address: Brussels, Belgium
Editors: Darja Fišer, Ruihong Huang, Vinodkumar Prabhakaran, Rob Voigt, Zeerak Waseem, Jacqueline Wernimont
Venue: ALW
Publisher: Association for Computational Linguistics
Pages: 101–106
URL: https://aclanthology.org/W18-5113
DOI: 10.18653/v1/W18-5113
Cite (ACL): Younghun Lee, Seunghyun Yoon, and Kyomin Jung. 2018. Comparative Studies of Detecting Abusive Language on Twitter. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 101–106, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Comparative Studies of Detecting Abusive Language on Twitter (Lee et al., ALW 2018)
PDF: https://aclanthology.org/W18-5113.pdf
Code: younggns/comparative-abusive-lang