Anna Planitzer
2024
AustroTox: A Dataset for Target-Based Austrian German Offensive Language Detection
Pia Pachinger | Janis Goldzycher | Anna Planitzer | Wojciech Kusa | Allan Hanbury | Julia Neidhardt
Findings of the Association for Computational Linguistics: ACL 2024
Model interpretability in toxicity detection benefits greatly from token-level annotations. However, such annotations are currently only available for English. We introduce a dataset annotated for offensive language detection, sourced from a news forum and notable for its incorporation of the Austrian German dialect, comprising 4,562 user comments. In addition to binary offensiveness classification, we identify the spans within each comment that constitute vulgar language or represent targets of offensive statements. We evaluate fine-tuned Transformer models as well as large language models in zero- and few-shot settings. The results indicate that while fine-tuned models excel at detecting linguistic peculiarities such as vulgar dialect, large language models demonstrate superior performance in detecting offensiveness in AustroTox.
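To make the annotation scheme concrete, here is a minimal sketch of what a single record could look like: a comment, a binary offensiveness label, and token-level spans for vulgar language and offense targets. The example comment, the field names (comment, offensive, spans), and the character-offset convention are illustrative assumptions, not the dataset's published schema.

# Hypothetical illustration of an AustroTox-style record; field names
# and offsets are assumptions, not the dataset's actual schema.
example_record = {
    "comment": "Des is komplett deppert, du Trottel!",  # "That's completely stupid, you idiot!"
    "offensive": True,
    "spans": [
        {"type": "vulgarity", "start": 16, "end": 23},  # "deppert"
        {"type": "target", "start": 25, "end": 35},     # "du Trottel"
    ],
}

# Print each annotated span from the hypothetical record.
for span in example_record["spans"]:
    text = example_record["comment"][span["start"]:span["end"]]
    print(span["type"], "->", text)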
2023
Toward Disambiguating the Definitions of Abusive, Offensive, Toxic, and Uncivil Comments
Pia Pachinger | Allan Hanbury | Julia Neidhardt | Anna Planitzer
Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP)
The definitions of abusive, offensive, toxic, and uncivil comments used when annotating corpora for automated content moderation overlap considerably, and researchers have called for their disambiguation. We summarize the definitions of these terms as they appear in 23 papers across different fields. We compare the examples given for uncivil, offensive, and toxic comments in an attempt to foster more unified scientific resources. Additionally, we stress that the term incivility, which appears frequently in the social science literature, is hardly mentioned in the computational linguistics and natural language processing literature we analyzed.