Abusive Language Detection using Syntactic Dependency Graphs

Kanika Narang, Chris Brew


Abstract
Automated detection of abusive language online has become imperative. Current sequential models (LSTMs) do not work well for long and complex sentences, while bidirectional transformer models (BERT) are not computationally efficient for the task. We show that classifiers based on the syntactic structure of the text, dependency graph convolutional networks (DepGCNs), can achieve state-of-the-art performance on abusive language datasets. Their overall performance is on par with that of strong baselines such as fine-tuned BERT. Further, our GCN-based approach is much more efficient than BERT at inference time, making it suitable for real-time detection.
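To make the DepGCN idea concrete, here is a minimal illustrative sketch (not the authors' code) of one graph-convolution layer applied over a sentence's dependency graph: tokens are nodes, head–dependent arcs are edges, and each layer mixes every token's embedding with those of its syntactic neighbours. All function names, dimensions, and the toy sentence are assumptions for illustration.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One GCN layer: H = ReLU(D^-1 (A + I) X W).

    X: (n_tokens, d_in) token embeddings
    A: (n_tokens, n_tokens) dependency adjacency matrix (symmetric)
    W: (d_in, d_out) learned weight matrix
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalise by node degree
    H = D_inv @ A_hat @ X @ W                 # aggregate neighbour features
    return np.maximum(H, 0.0)                 # ReLU non-linearity

# Toy sentence "you are awful": dependency arcs are->you, are->awful
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))   # stand-in 4-dim token embeddings
W = rng.normal(size=(4, 2))   # stand-in layer weights
H = gcn_layer(X, A, W)
# Pool node states into a single sentence vector for the abuse classifier
sent_vec = H.mean(axis=0)
```

In a real model the adjacency matrix would come from a dependency parser and the pooled vector would feed a classification head; this sketch only shows the graph-convolution step itself.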
Anthology ID:
2020.alw-1.6
Volume:
Proceedings of the Fourth Workshop on Online Abuse and Harms
Month:
November
Year:
2020
Address:
Online
Editors:
Seyi Akiwowo, Bertie Vidgen, Vinodkumar Prabhakaran, Zeerak Waseem
Venue:
ALW
Publisher:
Association for Computational Linguistics
Pages:
44–53
URL:
https://aclanthology.org/2020.alw-1.6
DOI:
10.18653/v1/2020.alw-1.6
Bibkey:
Cite (ACL):
Kanika Narang and Chris Brew. 2020. Abusive Language Detection using Syntactic Dependency Graphs. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 44–53, Online. Association for Computational Linguistics.
Cite (Informal):
Abusive Language Detection using Syntactic Dependency Graphs (Narang & Brew, ALW 2020)
PDF:
https://aclanthology.org/2020.alw-1.6.pdf
Video:
https://slideslive.com/38939532