Explainable Abuse Detection as Intent Classification and Slot Filling

Agostina Calabrese, Björn Ross, Mirella Lapata
Abstract
To proactively offer social media users a safe online experience, there is a need for systems that can detect harmful posts and promptly alert platform moderators. In order to guarantee the enforcement of a consistent policy, moderators are provided with detailed guidelines. In contrast, most state-of-the-art models learn what abuse is from labeled examples and as a result base their predictions on spurious cues, such as the presence of group identifiers, which can be unreliable. In this work we introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone. We propose a machine-friendly representation of the policy that moderators wish to enforce, by breaking it down into a collection of intents and slots. We collect and annotate a dataset of 3,535 English posts with such slots, and show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
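To make the framing concrete, below is a minimal sketch of a joint intent classification and slot filling model of the general kind the abstract describes. This is an illustrative assumption, not the authors' architecture: the intent labels, BIO slot names, and the bidirectional-LSTM encoder are all hypothetical placeholders. The key idea it demonstrates is that one encoder feeds two heads: a post-level head predicting the policy intent, and a token-level head tagging the slots that serve as the rationale.

```python
# Hypothetical sketch: one encoder, two heads (intent per post, slot per token).
# Labels and architecture are illustrative, not taken from the paper.
import torch
import torch.nn as nn

INTENTS = ["not_abusive", "dehumanisation", "threatening"]      # assumed intent labels
SLOTS = ["O", "B-target", "I-target", "B-action", "I-action"]   # assumed BIO slot tags

class JointIntentSlotModel(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, len(INTENTS))  # one label per post
        self.slot_head = nn.Linear(2 * hidden, len(SLOTS))      # one label per token

    def forward(self, token_ids: torch.Tensor):
        states, _ = self.encoder(self.embed(token_ids))          # (batch, seq, 2*hidden)
        intent_logits = self.intent_head(states.mean(dim=1))     # pooled post representation
        slot_logits = self.slot_head(states)                     # per-token slot tagging
        return intent_logits, slot_logits

model = JointIntentSlotModel(vocab_size=1000)
ids = torch.randint(0, 1000, (2, 12))            # two toy posts of 12 token ids each
intent_logits, slot_logits = model(ids)
print(intent_logits.shape, slot_logits.shape)    # torch.Size([2, 3]) torch.Size([2, 12, 5])
```

Under this framing, the predicted slot spans (e.g., the tagged target of an attack) double as an explanation for why a post was flagged, rather than the decision resting on an opaque post-level score alone.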
Anthology ID: 2022.tacl-1.82
Volume: Transactions of the Association for Computational Linguistics, Volume 10
Year: 2022
Address: Cambridge, MA
Editors: Brian Roark, Ani Nenkova
Venue: TACL
Publisher: MIT Press
Pages: 1440–1454
URL: https://aclanthology.org/2022.tacl-1.82
DOI: 10.1162/tacl_a_00527
Cite (ACL): Agostina Calabrese, Björn Ross, and Mirella Lapata. 2022. Explainable Abuse Detection as Intent Classification and Slot Filling. Transactions of the Association for Computational Linguistics, 10:1440–1454.
Cite (Informal): Explainable Abuse Detection as Intent Classification and Slot Filling (Calabrese et al., TACL 2022)
PDF: https://aclanthology.org/2022.tacl-1.82.pdf