LLM-Based Synthetic Datasets: Applications and Limitations in Toxicity Detection

Udo Kruschwitz, Maximilian Schmidhuber


Abstract
Large Language Model (LLM)-based synthetic data is becoming an increasingly important field of research. One of its promising applications is training classifiers to detect online toxicity, a growing concern in today's digital landscape. In this work, we assess the feasibility of using generative models to produce synthetic data for toxic speech detection. Our experiments are conducted on six toxicity datasets, four of which contain hateful speech and two of which are toxic in a broader sense. We then employ a classifier trained on the original data to filter the generated examples. To explore the potential of this data, we conduct experiments using combinations of original and synthetic data, synthetic oversampling of the minority class, and a comparison of training on original vs. synthetic data only. Results indicate that while our generative models offer benefits in certain scenarios, they do not improve classification on the hateful datasets; they do, however, boost patronizing and condescending language detection. We find that synthetic data generated by LLMs is a promising avenue of research, but further work is needed to improve the quality of the generated data and to develop better filtering methods. Code is available on GitHub; the generated dataset will be made available on Zenodo upon final submission.
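The synthetic oversampling setup described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the acceptance threshold, and the `score_fn` filter interface are illustrative assumptions; in the paper the filter is a classifier trained on the original data.

```python
import random

def oversample_with_synthetic(original, synthetic_pool, score_fn,
                              minority_label=1, threshold=0.5, seed=0):
    """Balance a binary dataset by adding filtered synthetic minority examples.

    original:       list of (text, label) pairs with labels 0/1
    synthetic_pool: candidate LLM-generated texts for the minority class
    score_fn:       filter model (e.g. a classifier trained on the original
                    data) returning the probability a text is minority-class
    """
    n_minority = sum(1 for _, y in original if y == minority_label)
    n_majority = len(original) - n_minority
    needed = max(0, n_majority - n_minority)

    # Keep only the synthetic texts the filter classifier accepts.
    accepted = [t for t in synthetic_pool if score_fn(t) >= threshold]

    # Sample as many accepted texts as are needed to balance the classes.
    rng = random.Random(seed)
    rng.shuffle(accepted)
    added = [(t, minority_label) for t in accepted[:needed]]
    return original + added
```

A keyword-based stand-in for `score_fn` is enough to exercise the logic; in practice the filter would be the trained toxicity classifier.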
Anthology ID:
2024.trac-1.6
Volume:
Proceedings of the Fourth Workshop on Threat, Aggression & Cyberbullying @ LREC-COLING-2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, Bharathi Raja Chakravarthi, Bornini Lahiri, Siddharth Singh, Shyam Ratan
Venues:
TRAC | WS
Publisher:
ELRA and ICCL
Pages:
37–51
URL:
https://aclanthology.org/2024.trac-1.6
Cite (ACL):
Udo Kruschwitz and Maximilian Schmidhuber. 2024. LLM-Based Synthetic Datasets: Applications and Limitations in Toxicity Detection. In Proceedings of the Fourth Workshop on Threat, Aggression & Cyberbullying @ LREC-COLING-2024, pages 37–51, Torino, Italia. ELRA and ICCL.
Cite (Informal):
LLM-Based Synthetic Datasets: Applications and Limitations in Toxicity Detection (Kruschwitz & Schmidhuber, TRAC-WS 2024)
PDF:
https://aclanthology.org/2024.trac-1.6.pdf