Dennis Assenmacher
2024
Multilingual Bot Accusations: How Different Linguistic Contexts Shape Perceptions of Social Bots
Leon Fröhling | Xiaofei Li | Dennis Assenmacher
Proceedings of the 4th Workshop on Computational Linguistics for the Political and Social Sciences: Long and short papers
Recent research indicates that the online use of the term "bot" has evolved over time. In the past, people used the term to accuse others of displaying automated behavior. However, it has gradually transformed into a linguistic tool to dehumanize the conversation partner, particularly on polarizing topics. Although this trend has been observed in English-speaking contexts, it is still unclear whether it holds true in other socio-linguistic environments. In this work, we extend existing work on bot accusations and explore the phenomenon in a multilingual setting. We identify three distinct accusation patterns that characterize the different languages.
2023
People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection
Indira Sen | Dennis Assenmacher | Mattia Samory | Isabelle Augenstein | Wil van der Aalst | Claudia Wagner
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
NLP models are used in a variety of critical social computing tasks, such as detecting sexist, racist, or otherwise hateful content. Therefore, it is imperative that these models are robust to spurious features. Past work has attempted to tackle such spurious features using training data augmentation, including Counterfactually Augmented Data (CADs). CADs introduce minimal changes to existing training data points and flip their labels; training on them may reduce model dependency on spurious features. However, manually generating CADs can be time-consuming and expensive. Hence, in this work, we assess whether this task can be automated using generative NLP models. We automatically generate CADs using Polyjuice, ChatGPT, and Flan-T5, and evaluate their usefulness in improving model robustness compared to manually generated CADs. By testing both model performance on multiple out-of-domain test sets and individual data point efficacy, our results show that while manual CADs are still the most effective, CADs generated by ChatGPT come a close second. One key reason for the lower performance of automated methods is that the changes they introduce are often insufficient to flip the original label.