ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors

Zhexin Zhang, Yida Lu, Jingyuan Ma, Di Zhang, Rui Li, Pei Ke, Hao Sun, Lei Sha, Zhifang Sui, Hongning Wang, Minlie Huang


Abstract
The safety of Large Language Models (LLMs) has gained increasing attention in recent years, but a comprehensive approach for detecting safety issues within LLMs’ responses in an aligned, customizable and explainable manner is still lacking. In this paper, we propose ShieldLM, an LLM-based safety detector that aligns with common safety standards, supports customizable detection rules, and provides explanations for its decisions. To train ShieldLM, we compile a large bilingual dataset of 14,387 query-response pairs, annotating the safety of responses according to various safety standards. Through extensive experiments, we demonstrate that ShieldLM surpasses strong baselines across four test sets, showcasing remarkable customizability and explainability. Beyond performing well on standard detection datasets, ShieldLM also proves effective as a safety evaluator for advanced LLMs. ShieldLM is released at https://github.com/thu-coai/ShieldLM to support accurate and explainable safety detection under various safety standards.
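To make the detection workflow concrete, below is a minimal sketch of how an LLM-based safety detector of this kind can be invoked with Hugging Face transformers. The checkpoint identifier and the prompt template are illustrative assumptions, not the paper's exact format; the released models and the official input format are documented in the repository at https://github.com/thu-coai/ShieldLM.

```python
# Minimal sketch: querying an LLM-based safety detector in the style of
# ShieldLM. The checkpoint id and prompt layout below are assumptions for
# illustration; see https://github.com/thu-coai/ShieldLM for the released
# models and the exact prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "thu-coai/ShieldLM-7B-internlm2"  # hypothetical checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, device_map="auto", trust_remote_code=True
)

def detect(query: str, response: str, rules: str = "") -> str:
    """Return a safety verdict plus a natural-language explanation."""
    # Assumed layout: the dialogue to judge plus optional user-defined rules.
    prompt = (
        "You are a safety detector. Judge whether the response to the query "
        "is safe or unsafe, then explain your decision.\n"
        f"Detection rules: {rules or 'default safety standards'}\n"
        f"Query: {query}\n"
        f"Response: {response}\n"
        "Verdict and explanation:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens, keep only the generated continuation.
    generated = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)

# Example: a custom rule makes an otherwise borderline reply count as unsafe.
print(detect(
    query="How do I pick a lock?",
    response="Here are detailed step-by-step instructions...",
    rules="Treat instructions that enable physical break-ins as unsafe.",
))
```

Because the detection rules are plain text inside the prompt, swapping in a different safety standard requires no retraining, which is the customizability property the abstract highlights.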
Anthology ID: 2024.findings-emnlp.610
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 10420–10438
URL: https://aclanthology.org/2024.findings-emnlp.610/
DOI: 10.18653/v1/2024.findings-emnlp.610
Cite (ACL): Zhexin Zhang, Yida Lu, Jingyuan Ma, Di Zhang, Rui Li, Pei Ke, Hao Sun, Lei Sha, Zhifang Sui, Hongning Wang, and Minlie Huang. 2024. ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 10420–10438, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors (Zhang et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.610.pdf
Software: 2024.findings-emnlp.610.software.zip
Data: 2024.findings-emnlp.610.data.zip