Gerben Timmerman
2021
DALC: the Dutch Abusive Language Corpus
Tommaso Caselli | Arjan Schelhaas | Marieke Weultjes | Folkert Leistra | Hylke van der Veen | Gerben Timmerman | Malvina Nissim
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
As socially unacceptable language becomes pervasive on social media platforms, the need for automatic content moderation becomes more pressing. This contribution introduces the Dutch Abusive Language Corpus (DALC v1.0), a new dataset of tweets manually annotated for abusive language. The resource addresses a gap in language resources for Dutch and adopts a multi-layer annotation scheme modeling the explicitness and the target of the abusive messages. Baseline experiments on all annotation layers have been conducted, achieving a macro F1 score of 0.748 for binary classification of the explicitness layer and 0.489 for target classification.
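As an illustration of the kind of baseline and metric the abstract mentions (binary classification of the explicitness layer, scored with macro F1), the sketch below shows a generic TF-IDF plus linear SVM pipeline; the file name, column names, and model choice are assumptions for illustration, not the authors' actual setup.

# Minimal sketch of a binary abusive-language baseline evaluated with macro F1.
# This is NOT the DALC authors' exact pipeline; the file name, column names,
# and model choice (TF-IDF + linear SVM) are illustrative assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical DALC-style file with tweet text and a binary abusive/not label.
df = pd.read_csv("dalc_tweets.csv")  # columns assumed: "text", "abusive"
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["abusive"], test_size=0.2, random_state=42, stratify=df["abusive"]
)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), LinearSVC())
clf.fit(X_train, y_train)

# Macro F1 averages the per-class F1 scores, so both classes count equally
# even when abusive tweets are the minority class.
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))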
2019
Grunn2019 at SemEval-2019 Task 5: Shared Task on Multilingual Detection of Hate
Mike Zhang | Roy David | Leon Graumans | Gerben Timmerman
Proceedings of the 13th International Workshop on Semantic Evaluation
Hate speech occurs more often than ever and polarizes society. To help counter this polarization, SemEval 2019 organizes a shared task on the Multilingual Detection of Hate. The first task (A) is to decide whether a given tweet contains hate against immigrants or women, from a multilingual perspective, for English and Spanish. In the second task (B), the system is further asked to classify hateful tweets as aggressive or not aggressive, and to identify the harassed target as individual or generic. We evaluate multiple models and finally combine them in an ensemble setting. This ensemble is built from five submodels for the English task and three for the Spanish task. In the current setup, the bigger ensemble for English tweets performs only moderately well, while the slightly smaller ensemble works well for detecting hate speech in Spanish tweets. Our results on the test set for English are 0.378 macro F1 on task A and 0.553 macro F1 on task B. For Spanish the results are significantly higher: 0.701 macro F1 on task A and 0.734 macro F1 on task B.
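For readers unfamiliar with the ensemble setting described above, the sketch below shows one generic way to combine submodel predictions by majority vote and score the result with macro F1; the toy predictions, label encoding, and voting rule are illustrative assumptions, not the system submitted to the shared task.

# Minimal sketch of combining submodel predictions by majority vote and
# scoring the result with macro F1. The predictions below are placeholders,
# not the actual outputs of the system described in the paper.
import numpy as np
from sklearn.metrics import f1_score

def majority_vote(prediction_matrix: np.ndarray) -> np.ndarray:
    """Each row holds one submodel's labels; return the per-tweet majority label."""
    # np.bincount over each column picks the most frequent label per example.
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), axis=0,
                               arr=prediction_matrix)

# Toy example: five submodels (as for the English task), six tweets,
# binary labels (1 = hateful, 0 = not hateful).
preds = np.array([
    [1, 0, 1, 0, 1, 1],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
])
gold = np.array([1, 0, 1, 0, 1, 1])

ensemble = majority_vote(preds)
print("macro F1:", f1_score(gold, ensemble, average="macro"))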