Can LLMs Recognize Toxicity? A Structured Investigation Framework and Toxicity Metric

Hyukhun Koh, Dohyung Kim, Minwoo Lee, Kyomin Jung


Abstract
In the pursuit of developing Large Language Models (LLMs) that adhere to societal standards, it is imperative to detect toxicity in generated text. Most existing toxicity metrics rely on encoder models trained on specific toxicity datasets, which are susceptible to out-of-distribution (OOD) problems and depend on each dataset’s definition of toxicity. In this paper, we introduce a robust metric grounded in LLMs that flexibly measures toxicity according to a given definition. We first analyze toxicity factors, then examine the intrinsic toxic attributes of LLMs to ascertain their suitability as evaluators. Finally, we evaluate the performance of our metric with detailed analysis. Our empirical results demonstrate outstanding performance in measuring toxicity within verified factors, improving on conventional metrics by 12 points in F1 score. Our findings also indicate that upstream toxicity significantly influences downstream metrics, suggesting that LLMs are unsuitable for toxicity evaluation within unverified factors.
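The abstract's core idea of measuring toxicity "according to the given definition" can be illustrated with a minimal sketch: condition an LLM judge on an explicit toxicity definition and parse its verdict. This is not the authors' implementation; the prompt template and the `llm` callable are hypothetical stand-ins.

```python
# Illustrative sketch of a definition-conditioned, LLM-based toxicity check.
# The prompt wording and the `llm` callable are assumptions, not the paper's code.

def build_toxicity_prompt(definition: str, text: str) -> str:
    """Condition the judge on an explicit toxicity definition."""
    return (
        "You are a toxicity evaluator.\n"
        f"Definition of toxicity: {definition}\n"
        f"Text to evaluate: {text}\n"
        "Answer with exactly one word: 'toxic' or 'non-toxic'."
    )

def is_toxic(llm, definition: str, text: str) -> bool:
    """Ask the LLM to judge `text` under `definition`; parse the one-word verdict."""
    reply = llm(build_toxicity_prompt(definition, text)).strip().lower()
    # "non-toxic" does not start with "toxic", so a prefix check suffices here.
    return reply.startswith("toxic")

# Usage with a stub in place of a real model call:
stub = lambda prompt: "non-toxic"
verdict = is_toxic(stub, "language that demeans a person or group", "Have a nice day.")
```

Because the definition is supplied at query time rather than baked into a trained encoder, the same judge can be reused under different notions of toxicity, which is the flexibility the abstract contrasts with dataset-bound encoder metrics.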
Anthology ID:
2024.findings-emnlp.353
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6092–6114
URL:
https://aclanthology.org/2024.findings-emnlp.353
Cite (ACL):
Hyukhun Koh, Dohyung Kim, Minwoo Lee, and Kyomin Jung. 2024. Can LLMs Recognize Toxicity? A Structured Investigation Framework and Toxicity Metric. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6092–6114, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Can LLMs Recognize Toxicity? A Structured Investigation Framework and Toxicity Metric (Koh et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.353.pdf