Christian Widmer
2021
Investigating Annotator Bias in Abusive Language Datasets
Maximilian Wich | Christian Widmer | Gerhard Hagerer | Georg Groh
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
Social media platforms use classification models to cope with hate speech and abusive language. A key weakness of these models is their vulnerability to bias. A prevalent form of bias in hate speech and abusive language datasets is annotator bias, caused by the annotators' subjective perceptions and the complexity of the annotation task. In our paper, we develop a set of methods to measure annotator bias in abusive language datasets and to identify different perspectives on abusive language. We apply these methods to four abusive language datasets. Our proposed approach supports the annotation processes of such datasets and future research addressing different perspectives on the perception of abusive language.
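Since the abstract only sketches the idea of measuring annotator bias, here is a minimal, hypothetical illustration of one simple proxy: each annotator's rate of disagreement with the per-item majority label. This is a generic sketch, not the method proposed in the paper; the data and names (`annotations`, `disagreement_rates`, the toy posts) are invented for illustration.

```python
# Generic, hypothetical proxy for annotator bias: how often each annotator
# deviates from the per-item majority label. NOT the paper's actual method.
from collections import Counter

# Toy crowdsourced labels: item_id -> {annotator_id: label} (invented data)
annotations = {
    "post_1": {"a1": "abusive", "a2": "abusive", "a3": "neutral"},
    "post_2": {"a1": "neutral", "a2": "neutral", "a3": "neutral"},
    "post_3": {"a1": "abusive", "a2": "neutral", "a3": "neutral"},
}

def majority_label(labels):
    """Return the most frequent label among annotators for one item."""
    return Counter(labels.values()).most_common(1)[0][0]

def disagreement_rates(annotations):
    """Fraction of items on which each annotator deviates from the majority."""
    deviations, totals = Counter(), Counter()
    for labels in annotations.values():
        majority = majority_label(labels)
        for annotator, label in labels.items():
            totals[annotator] += 1
            deviations[annotator] += label != majority
    return {a: deviations[a] / totals[a] for a in totals}

print(disagreement_rates(annotations))
# e.g. {'a1': 0.333..., 'a2': 0.0, 'a3': 0.333...}
```

A consistently high disagreement rate flags an annotator whose perception of abusiveness systematically deviates from the crowd, which is one crude proxy for the kind of annotator bias the paper investigates with more refined methods.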
End-to-End Annotator Bias Approximation on Crowdsourced Single-Label Sentiment Analysis
Gerhard Hagerer | David Szabo | Andreas Koch | Maria Luisa Ripoll Dominguez | Christian Widmer | Maximilian Wich | Hannah Danner | Georg Groh
Proceedings of the 4th International Conference on Natural Language and Speech Processing (ICNLSP 2021)