Revisiting Implicitly Abusive Language Detection: Evaluating LLMs in Zero-Shot and Few-Shot Settings

Julia Jaremko, Dagmar Gromann, Michael Wiegand


Abstract
Implicitly abusive language (IAL), unlike its explicit counterpart, lacks overt slurs or unambiguously offensive keywords, such as “bimbo” or “scum”, making it challenging to detect and mitigate. While current research predominantly focuses on explicitly abusive language, the subtler and more covert forms of IAL remain insufficiently studied. The rapid advancement and widespread adoption of large language models (LLMs) have opened new possibilities for various NLP tasks, but their application to IAL detection has been limited. We revisit three recent, challenging datasets of IAL and investigate the potential of LLMs to enhance the detection of IAL in English through zero-shot and few-shot prompting approaches. We evaluate the models’ capabilities in classifying sentences directly as either IAL or benign, and in extracting linguistic features associated with IAL. Our results indicate that classifiers trained on features extracted by advanced LLMs outperform the best previously reported results, achieving near-human performance.
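A minimal sketch of the zero-shot classification setup the abstract describes: prompt an LLM to label a single sentence as IAL or benign, then map the free-text reply to a class label. The prompt wording, label vocabulary, and parsing rule below are illustrative assumptions, not the authors' actual prompts.

```python
# Hypothetical zero-shot prompting sketch for IAL detection.
# The task definition and label set ("IAL" vs. "BENIGN") are assumptions
# made for illustration; the paper's exact prompts may differ.

def build_zero_shot_prompt(sentence: str) -> str:
    """Build a zero-shot classification prompt for one sentence."""
    return (
        "Implicitly abusive language (IAL) conveys abuse without slurs "
        "or overtly offensive keywords.\n"
        f'Sentence: "{sentence}"\n'
        "Answer with exactly one word: IAL or BENIGN."
    )

def parse_label(model_output: str) -> str:
    """Map a raw model reply to one of the two classes.

    Defaults to BENIGN if no IAL marker is found in the reply.
    """
    return "IAL" if "IAL" in model_output.strip().upper() else "BENIGN"
```

A few-shot variant would simply prepend labeled example sentences to the prompt before the target sentence; the feature-extraction setting instead asks the model for linguistic properties (e.g. sentiment, stereotype cues) that a downstream classifier is trained on.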
Anthology ID:
2025.coling-main.262
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3879–3898
URL:
https://aclanthology.org/2025.coling-main.262/
Cite (ACL):
Julia Jaremko, Dagmar Gromann, and Michael Wiegand. 2025. Revisiting Implicitly Abusive Language Detection: Evaluating LLMs in Zero-Shot and Few-Shot Settings. In Proceedings of the 31st International Conference on Computational Linguistics, pages 3879–3898, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Revisiting Implicitly Abusive Language Detection: Evaluating LLMs in Zero-Shot and Few-Shot Settings (Jaremko et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.262.pdf