Don’t Augment, Rewrite? Assessing Abusive Language Detection with Synthetic Data

Camilla Casula, Elisa Leonardelli, Sara Tonelli


Abstract
Research on abusive language detection and content moderation is crucial to combating online harm. However, current limitations set by regulatory bodies and social media platforms can make it difficult to share collected data. We address this challenge by exploring the possibility of replacing existing English datasets for abusive language detection with synthetic data obtained by rewriting the original texts with an instruction-based generative model. We show that such data can be effectively used to train a classifier whose performance is in line with, and sometimes better than, that of a classifier trained on the original data. Training with synthetic data also appears to improve robustness in a cross-dataset setting. A manual inspection of the generated data confirms that rewriting makes it impossible to retrieve the original texts online.
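As a rough illustration of the rewriting idea described in the abstract (not the authors' actual pipeline), the sketch below rewrites a text with an instruction-tuned generative model via the Hugging Face transformers library; the model choice (google/flan-t5-base), prompt wording, and generation settings are assumptions for illustration only.

# Minimal sketch of instruction-based rewriting; not the paper's implementation.
from transformers import pipeline

# Any instruction-tuned text-to-text model could stand in here.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

def rewrite(text: str) -> str:
    # Ask the model to paraphrase the input; the goal is a synthetic text
    # that preserves the (abusive or non-abusive) label of the original.
    prompt = f"Rewrite the following text, preserving its meaning: {text}"
    out = generator(prompt, max_new_tokens=64, do_sample=True)
    return out[0]["generated_text"]

# Each original training example would be replaced by its rewrite,
# and a classifier trained on the resulting synthetic dataset.
print(rewrite("example post to be replaced with a synthetic rewrite"))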
Anthology ID:
2024.findings-acl.669
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11240–11247
URL:
https://aclanthology.org/2024.findings-acl.669
DOI:
10.18653/v1/2024.findings-acl.669
Cite (ACL):
Camilla Casula, Elisa Leonardelli, and Sara Tonelli. 2024. Don’t Augment, Rewrite? Assessing Abusive Language Detection with Synthetic Data. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11240–11247, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Don’t Augment, Rewrite? Assessing Abusive Language Detection with Synthetic Data (Casula et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.669.pdf